https://math.stackexchange.com/questions/1176016/how-to-prove-the-set-distributive-law-for-n-sets
# How to prove the set distributive law for $n$ sets?

How do you show that if you have sets $B_1, B_2, \ldots, B_n$ and a set $C$, then $$(B_1\cap B_2\cap \cdots \cap B_n)\cup C= (B_1\cup C)\cap(B_2\cup C) \cap \cdots \cap (B_n\cup C)\,?$$ Thanks.

You have $$( B_1\cap B_2)\cup C = (B_1\cup C)\cap(B_2 \cup C).$$ Use an induction argument.

• @tim29 Induction arguments have base cases and inductive steps. The base case is what ncmathsadist wrote in his answer. The inductive step involves assuming that your statement is true for $n=k$, and proving that it holds for $n=k+1$. The base case is the same as your equation if we set $n=2$. Mar 5 '15 at 5:24

Corollary of (ii): \begin{align*} (B_1\cap B_2\cap\cdots \cap B_n)\cup C&= \left(\bigcap_{i=1}^{n}B_i\right)\cup C\\ &=\bigcap_{i=1}^n(B_i\cup C)\\ &=(B_1\cup C)\cap(B_2\cup C)\cap\cdots\cap(B_n\cup C) \end{align*}
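The inductive step suggested in the answer can be written out explicitly; this is a sketch that assumes the two-set law $(A\cap B)\cup C=(A\cup C)\cap(B\cup C)$ as the base case:

```latex
% Inductive step: assume the law holds for k sets, prove it for k+1.
\begin{align*}
\left(\bigcap_{i=1}^{k+1} B_i\right)\cup C
  &= \left(\left(\bigcap_{i=1}^{k} B_i\right)\cap B_{k+1}\right)\cup C\\
  &= \left(\left(\bigcap_{i=1}^{k} B_i\right)\cup C\right)\cap(B_{k+1}\cup C)
     && \text{two-set law, with } A=\textstyle\bigcap_{i=1}^{k} B_i\\
  &= \left(\bigcap_{i=1}^{k}(B_i\cup C)\right)\cap(B_{k+1}\cup C)
     && \text{inductive hypothesis}\\
  &= \bigcap_{i=1}^{k+1}(B_i\cup C)
\end{align*}
```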
2022-01-20 05:16:04
http://docs.gl/es2/glFramebufferRenderbuffer
glFramebufferRenderbuffer

Name

glFramebufferRenderbuffer — attach a renderbuffer object to a framebuffer object

C Specification

void glFramebufferRenderbuffer( GLenum target,
                                GLenum attachment,
                                GLenum renderbuffertarget,
                                GLuint renderbuffer);

Parameters

target
    Specifies the framebuffer target. The symbolic constant must be GL_FRAMEBUFFER.
attachment
    Specifies the attachment point to which renderbuffer should be attached. Must be one of the following symbolic constants: GL_COLOR_ATTACHMENT0, GL_DEPTH_ATTACHMENT, or GL_STENCIL_ATTACHMENT.
renderbuffertarget
    Specifies the renderbuffer target. The symbolic constant must be GL_RENDERBUFFER.
renderbuffer
    Specifies the renderbuffer object that is to be attached.

Description

glFramebufferRenderbuffer attaches the renderbuffer specified by renderbuffer as one of the logical buffers of the currently bound framebuffer object. attachment specifies whether the renderbuffer should be attached to the framebuffer object's color, depth, or stencil buffer. A renderbuffer may not be attached to the default framebuffer object name 0.

If renderbuffer is not 0, the value of GL_FRAMEBUFFER_ATTACHMENT_OBJECT_TYPE for the specified attachment point is set to GL_RENDERBUFFER and the value of GL_FRAMEBUFFER_ATTACHMENT_OBJECT_NAME is set to renderbuffer. GL_FRAMEBUFFER_ATTACHMENT_TEXTURE_LEVEL and GL_FRAMEBUFFER_ATTACHMENT_TEXTURE_CUBE_MAP_FACE are set to the default values 0 and GL_TEXTURE_CUBE_MAP_POSITIVE_X, respectively. Any previous attachment to the attachment logical buffer of the currently bound framebuffer object is broken.

If renderbuffer is 0, the current image, if any, attached to the attachment logical buffer of the currently bound framebuffer object is detached. The value of GL_FRAMEBUFFER_ATTACHMENT_OBJECT_TYPE is set to GL_NONE. The value of GL_FRAMEBUFFER_ATTACHMENT_OBJECT_NAME is set to 0.
GL_FRAMEBUFFER_ATTACHMENT_TEXTURE_LEVEL and GL_FRAMEBUFFER_ATTACHMENT_TEXTURE_CUBE_MAP_FACE are set to the default values 0 and GL_TEXTURE_CUBE_MAP_POSITIVE_X, respectively.

Notes

If a renderbuffer object is deleted while its image is attached to the currently bound framebuffer, then it is as if glFramebufferRenderbuffer had been called with a renderbuffer of 0 for the attachment point to which this image was attached in the currently bound framebuffer object. In other words, the renderbuffer image is detached from the currently bound framebuffer. Note that the renderbuffer image is specifically not detached from any non-bound framebuffers. Detaching the image from any non-bound framebuffers is the responsibility of the application.

Errors

GL_INVALID_ENUM is generated if target is not GL_FRAMEBUFFER.
GL_INVALID_ENUM is generated if renderbuffertarget is not GL_RENDERBUFFER and renderbuffer is not 0.
GL_INVALID_ENUM is generated if attachment is not an accepted attachment point.
GL_INVALID_OPERATION is generated if the default framebuffer object name 0 is bound.
GL_INVALID_OPERATION is generated if renderbuffer is neither 0 nor the name of an existing renderbuffer object.

Associated Gets

glGetFramebufferAttachmentParameteriv

Examples

Create a framebuffer object with a renderbuffer-based color attachment and a renderbuffer-based depth attachment.

// fbo_width and fbo_height are the desired width and height of the FBO.
// Renderbuffer dimensions need not be powers of two; they only need to be
// no larger than GL_MAX_RENDERBUFFER_SIZE.

// Build the renderbuffer that will serve as the color attachment for the framebuffer.
// GL_RGBA4 is a color-renderable format in core OpenGL ES 2.0
// (GL_RGBA8 requires the OES_rgb8_rgba8 extension).
GLuint color_renderbuffer;
glGenRenderbuffers(1, &color_renderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, color_renderbuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA4, fbo_width, fbo_height);
glBindRenderbuffer(GL_RENDERBUFFER, 0);

// Build the renderbuffer that will serve as the depth attachment for the framebuffer.
// GL_DEPTH_COMPONENT16 is the depth-renderable format in core OpenGL ES 2.0.
GLuint depth_renderbuffer;
glGenRenderbuffers(1, &depth_renderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, depth_renderbuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, fbo_width, fbo_height);
glBindRenderbuffer(GL_RENDERBUFFER, 0);

// Build the framebuffer.
GLuint framebuffer;
glGenFramebuffers(1, &framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, color_renderbuffer);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depth_renderbuffer);

GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if (status != GL_FRAMEBUFFER_COMPLETE) {
    // Handle the error: the framebuffer cannot be rendered to.
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);
2018-11-20 11:32:43
https://www.technicalfeeder.com/2022/04/how-to-determine-test-values-for-unit-testing/
How to determine test values for unit testing

Are you a beginner at unit testing? Are you not confident enough to choose test cases? You are not alone. Let's learn how to determine test cases so we can be more confident. The code written here is TypeScript.

Why unit testing is necessary

Why is unit testing necessary in the first place? It's actually not necessary if we don't have to maintain the application in the future. If it is a private project, it's up to you whether to write unit tests or not.

Normally, an application gradually gets bigger and bigger. It might be clear at the beginning what to do to add a new feature or change a behavior. However, we forget what we did before and why we implemented it that way. There might be a reason why the code is so complicated, but we don't remember the details. It is not easy to modify code without knowing those things.

Even if we decide to refactor the code, without unit tests we need to run the application and test all the features again whenever we change the code. That is time-consuming, and we can't guarantee that the application works as before. We have to test manually, but some tests might be skipped because they are performed by humans. Even if we have a test list, it might not be clear enough for a new tester to perform the same tests.

Unit testing doesn't cover all tests because it tests a single "unit". However, it definitely reduces the burden of manual testing. Since unit tests are written in a programming language, there is no human interaction, and thus the results are basically the same for each execution. The tests are executed by a command. We need to run the command manually during development, but all tests are executed once we run it. It's automated. We don't need any preparation for the tests, and the steps are exactly the same even when a new developer runs them.
We can refactor code confidently if the unit tests are written correctly, because they guarantee that a feature works exactly the same way as long as the tests pass. That means the refactoring cycle gets faster and the code improves at a decent speed. A developer should know how to write unit tests. Let's say that developers who don't know how to write unit tests are beginners, even if they have 10 years of experience. Production code without test code is called "legacy code". We should not write legacy code.

Confirm that the test code is correct

A developer (at least I) tends to think "the test code that I wrote works as expected". If a test goes green, we consider it a success. This is not always true. Test code is also written by humans, so it can contain mistakes. If a test contains a mistake and therefore always goes green, the error is hard to find, because we tend to assume the unit test is correct. Even if we are lucky enough to find the error, it can take a while: a test written a long time ago must itself be checked to see whether it ever worked as expected. If unit tests are not reliable, it doesn't make sense to have them. Even worse, they confuse us.

Check that a test really fails when you change either a test value or an expected value. It might stay green even after changing one of them. You might think you would never write such a test but, unfortunately, it happens when the target code is a bit tangled... Some bad unit tests load a test data file and contain conditional clauses that determine which test to execute. If the data doesn't match any condition, the test is not performed, but the test result doesn't show that. We should confirm that added tests are really executed.

What do we test against

We know how a function behaves for a specific input: if we give input X, the output must be Y. We should basically test all conditional cases because we know how the function works. The test values should be boundary values.
One condition Let’s consider the easiest case. The following function returns true if the input is 6 or bigger. function isBiggerThanFive(value: number) { return value > 5; } The boundary values are 5 and 6. We don’t have to input 1,2,3,4,5,6,7 because the function returns the same result for some of them. In this case, 5 and 6 are enough. Test value: 5, 6 Two conditions Let’s take a look at this second example that has two conditions. There are two boundaries in this case. function isBiggerThan5AndLessThan10(value: number) { return value > 5 && value < 10; } The range is short enough, so we can put test values like 3,4,5,6,7,8,9,10,11 but it is not nice. The boundary values are 5 and 6 for the small boundary, 9 and 10 for the big boundary. Test value: 5, 6, 9, 10 JavaScript function that requires an unknown argument Was it easy? Let’s go to the next function. This is JavaScript code that returns the average of the received array. function average(array){ let sum = 0; for(let i = 0; i < array.length; i++) { sum += array[i]; } return sum/array.length; } You probably come up with the following data or similar input. console.log(average([1,2,3,4,5])); // 3 Did you come up with the following data too? console.log(average(["1","2","3","4","5"])); // 2469 console.log(average([])); // NaN console.log(average([1, 2, "3", 4])); // 83.5 -> 334/4 Some of you might not come up with those cases. However, it can happen in reality because it doesn’t have any information about the data type. If it’s production code, it might have assertion code or comment for the type but we don’t know the received argument without that information. The function works as expected for normal cases. This is needless to say important and we must take care of them but we should also take care of error cases or unexpected input. Our application will be robust by considering error cases. It’s easy for everyone to find normal use cases. A better programmer finds further error values and fixes the bugs. 
Let’s try to find error cases as many as possible. Function requires an object that has mixed data types Once we complete practices to learn how to code, we create a function that is more complicated. I created a function that requires an object that has 3 properties. The definition of the arguments is as follows. export interface ScoreInput { original: string; userInput: string; time: number; } I’ve developed a simple typing game running on a console. After an example sentence is shown, a user types the same sentence. After that, the application calculates the score by checking the two sentences and the input time. The specification is something like this. • The max score is 10000 • The min score is 0 • The score is multiplied by the correct rate • The result is divided by the squared elapsed time (unit is second) • If the time is less than 1, set 1 to the time You can run the console application on your machine if you clone my repository from here. GitHub - yuto-yuto/unit-testing Contribute to yuto-yuto/unit-testing development by creating an account on GitHub. The implementation is the following. export function calculateScore(args: ScoreInput): number { let diffCount = 0; for (let i = 0; i < args.original.length; i++) { if (args.userInput[i] === undefined || args.userInput[i] === null) { diffCount++; continue; } if (args.original[i] !== args.userInput[i]) { diffCount++; } } if (args.userInput.length > args.original.length) { diffCount += args.userInput.length - args.original.length; } const calculateCorrectRate = () => { if (diffCount === 0) { return 1; } else if (args.original.length === 0 || args.original.length < diffCount) { return 0; } else { return (args.original.length - diffCount) / args.original.length; } }; const correctRate = calculateCorrectRate(); const time = args.time <= 1 ? 1 : Math.sqrt(args.time); return 10000 * correctRate / time; } What should we test for this function? We can have the following 8 cases for the string input. 
For the time value, I think 0.5, 1, and 4 are enough. When the time is 4, the divisor eventually becomes 2, so the expected score is easy to calculate. The actual test implementation becomes as follows.

import "mocha";
import { expect } from "chai";
import { calculateScore } from "../../lib/typing-game/ScoreCalculator";

describe("ScoreCalculator", () => {
    describe("calculateScore", () => {
        context("original is empty", () => {
            it("should return 10000 when time is 1 and user input is empty", () => {
                const result = calculateScore({ original: "", userInput: "", time: 1 });
                expect(result).to.equal(10000);
            });
            it("should return 0 when time is 1 and user input is not empty", () => {
                const result = calculateScore({ original: "", userInput: "1", time: 1 });
                expect(result).to.equal(0);
            });
        });
        context("when user input matches original", () => {
            [0.5, 1].forEach((time) => {
                it(`should return 10000 when time is ${time}`, () => {
                    const result = calculateScore({ original: "input", userInput: "input", time });
                    expect(result).to.equal(10000);
                });
            });
            it("should return 5000 when time is 4", () => {
                const result = calculateScore({ original: "input", userInput: "input", time: 4 });
                expect(result).to.equal(5000);
            });
        });
        context("when user input does not match original", () => {
            context("when time is 1", () => {
                it("should return 5000 when half correct and the same length", () => {
                    const result = calculateScore({ original: "abcd", userInput: "abdc", time: 1 });
                    expect(result).to.equal(5000);
                });
                it("should return 5000 when userInput is half length", () => {
                    const result = calculateScore({ original: "abcd", userInput: "ab", time: 1 });
                    expect(result).to.equal(5000);
                });
                it("should return 0 when userInput is empty", () => {
                    const result = calculateScore({ original: "abcd", userInput: "", time: 1 });
                    expect(result).to.equal(0);
                });
                it("should return 0 when incorrect count is bigger than the original length", () => {
                    const result = calculateScore({ original: "abcd", userInput: "xxxxyyyy", time: 1 });
                    expect(result).to.equal(0);
                });
                it("should return 5000 when userInput has half length extra input", () => {
                    const result = calculateScore({ original: "abcd", userInput: "abcd11", time: 1 });
                    expect(result).to.equal(5000);
                });
            });
        });
    });
});

When I wrote the function above, I didn't prepare the specification first. I found some bugs while writing this article and fixed them. Even if I had had the specification in advance, I don't think I could have implemented it without bugs. In the first implementation, I didn't consider excessive input, where the number of incorrect letters exceeds the length of the original string; in that case the result becomes a negative value. In addition, I didn't consider that a user might finish within 1 second. I don't think that causes a real problem, because it's too hard to type everything within a second, but we should consider the case anyway. Such cases are often overlooked during implementation. Doubt yourself to find new test cases.
2022-09-26 07:12:31
http://residuetheorem.com/2014/04/
## Monthly Archives: April 2014

### Generalizing an already tough integral

I did the case $p=1$ here. The generalization to higher $p$ may involve higher-order derivatives as follows. Substituting $y=2x$ and using $1+\cos^2{x}=\tfrac12(3+\cos{2x})$ gives \begin{align}K_p &= \int_0^{\pi/2} dx \frac{x^{2 p}}{1+\cos^2{x}} = \frac1{2^{2 p+1}} \int_{-\pi}^{\pi} dy \frac{y^{2 p}}{3+\cos{y}} \end{align} So define, as before, $$J(a) = \int_{-\pi}^{\pi} dy \frac{e^{i a y}}{3+\cos{y}}$$ Then $$K_p = \frac{(-1)^p}{2^{2 p+1}} \left [\frac{d^{2 p}}{da^{2 p}} J(a) \right ]_{a=0}$$ […]
2017-11-23 21:52:09
http://jeremykun.com/category/primers/
# Martingales and the Optional Stopping Theorem This is a guest post by my colleague Adam Lelkes. The goal of this primer is to introduce an important and beautiful tool from probability theory, a model of fair betting games called martingales. In this post I will assume that the reader is familiar with the basics of probability theory. For those that need to refresh their knowledge, Jeremy’s excellent primers (1, 2) are a good place to start. ## The Geometric Distribution and the ABRACADABRA Problem Before we start playing with martingales, let’s start with an easy exercise. Consider the following experiment: we throw an ordinary die repeatedly until the first time a six appears. How many throws will this take in expectation? The reader might recognize immediately that this exercise can be easily solved using the basic properties of the geometric distribution, which models this experiment exactly. We have independent trials, every trial succeeding with some fixed probability $p$. If $X$ denotes the number of trials needed to get the first success, then clearly $\Pr(X = k) = (1-p)^{k-1} p$ (since first we need $k-1$ failures which occur independently with probability $1-p$, then we need one success which happens with probability $p$). Thus the expected value of $X$ is $\displaystyle E(X) = \sum_{k=1}^\infty k P(X = k) = \sum_{k=1}^\infty k (1-p)^{k-1} p = \frac1p$ by basic calculus. In particular, if success is defined as getting a six, then $p=1/6$ thus the expected time is $1/p=6$. Now let us move on to a somewhat similar, but more interesting and difficult problem, the ABRACADABRA problem. Here we need two things for our experiment, a monkey and a typewriter. The monkey is asked to start bashing random keys on a typewriter. For simplicity’s sake, we assume that the typewriter has exactly 26 keys corresponding to the 26 letters of the English alphabet and the monkey hits each key with equal probability. 
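Incidentally, the first exercise is easy to sanity-check numerically. This is an illustrative sketch (not from the original post); the tiny seeded generator is only there to make the run reproducible:

```typescript
// Average number of die throws until the first six; theory says E(X) = 1/p = 6.
let seed = 123456789;
function rand(): number {
    // Park-Miller linear congruential generator: good enough for a sanity check.
    seed = (seed * 48271) % 2147483647;
    return seed / 2147483647;
}

function throwsUntilSix(): number {
    let throws = 0;
    do {
        throws++;
    } while (Math.floor(rand() * 6) + 1 !== 6); // roll a die in 1..6
    return throws;
}

const trials = 100000;
let total = 0;
for (let i = 0; i < trials; i++) {
    total += throwsUntilSix();
}
const averageThrows = total / trials;
console.log(averageThrows.toFixed(2)); // close to 6 in expectation
```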
There is a famous theorem in probability, the infinite monkey theorem, which states that given infinite time, our monkey will almost surely type the complete works of William Shakespeare. Unfortunately, according to astronomers the sun will begin to die in a few billion years, and the expected time we would need to wait until a monkey types the complete works of William Shakespeare is orders of magnitude longer, so it is not feasible to use monkeys to produce works of literature. So let's scale down our goals, and just wait until our monkey types the word ABRACADABRA. What is the expected time we need to wait until this happens? The reader's first idea might be to use the geometric distribution again. ABRACADABRA is eleven letters long, the probability of getting one letter right is $\frac{1}{26}$, thus the probability of a random eleven-letter word being ABRACADABRA is exactly $\left(\frac{1}{26}\right)^{11}$. So if typing 11 letters is one trial, the expected number of trials is $\displaystyle \frac1{\left(\frac{1}{26}\right)^{11}}=26^{11}$, which means $11\cdot 26^{11}$ keystrokes, right? Well, not exactly. The problem is that we broke up our random string into eleven-letter blocks and waited until one block was ABRACADABRA. However, the word can start in the middle of a block. In other words, we considered a string a success only if the starting position of the word ABRACADABRA was divisible by 11. For example, FRZUNWRQXKLABRACADABRA would be recognized as a success by this model, but the same would not be true for AABRACADABRA. However, it is at least clear from this observation that $11\cdot 26^{11}$ is a strict upper bound for the expected waiting time. To find the exact solution, we need one very clever idea, which is the following:

## Let's Open a Casino!

Do I mean that abandoning our monkey and typewriter and investing our time and money in a casino is a better idea, at least in financial terms?
This might indeed be the case, but here we will use a casino to determine the expected wait time for the ABRACADABRA problem. Unfortunately we won't make any money along the way (in expectation), since our casino will be a fair one. Let's do the following thought experiment: let's open a casino next to our typewriter. Before each keystroke, a new gambler comes to our casino and bets $1 that the next letter will be A. If he loses, he goes home disappointed. If he wins, he bets all the money he won on the event that the next letter will be B. Again, if he loses, he goes home disappointed. (This won't wreak havoc on his financial situation, though, as he only loses $1 of his own money.) If he wins again, he bets all the money on the event that the next letter will be R, and so on. If a gambler wins, how much does he win? We said that the casino would be fair, i.e. the expected outcome should be zero. That means that if the gambler bets $1, he should receive $26 if he wins, since the probability of getting the next letter right is exactly $\frac{1}{26}$ (thus the expected value of the change in the gambler's fortune is $\frac{25}{26}\cdot (-1) + \frac{1}{26}\cdot (+25) = 0$). Let's keep playing this game until the word ABRACADABRA first appears, and let's denote the number of keystrokes up to this time by $T$. As soon as we see this word, we close our casino. How much was the revenue of our casino then? Remember that before each keystroke a new gambler comes in and bets $1, and if he wins, he will only bet the money he has received so far, so our revenue will be exactly $T$ dollars. How much will we have to pay the winners? Note that the only winners in the last round are the players who bet on A. How many of them are there? There is one who just came in before the last keystroke; this was his first bet, and he wins $26. There is one who came three keystrokes earlier and made four successful bets (ABRA). He wins $26^4$ dollars.
Finally, there is the luckiest gambler, who went through the whole ABRACADABRA sequence; his prize will be $26^{11}$ dollars. Thus our casino will have to give out $26^{11}+26^4+26$ dollars in total, which is just under the price of 200,000 WhatsApp acquisitions. Now we will make one crucial observation: even at the time when we close the casino, the casino is fair! Thus in expectation our expenses will be equal to our income. Our income is $T$ dollars, and the expected value of our expenses is $26^{11}+26^4+26$ dollars, thus $E(T)=26^{11}+26^4+26$. A beautiful solution, isn't it? So if our monkey types at 150 characters per minute on average, we will have to wait around 47 million years until we see ABRACADABRA. Oh well.

## Time to be More Formal

After giving an intuitive outline of the solution, it is time to formalize the concepts that we used, to translate our fairy tales into mathematics. The mathematical model of the fair casino is called a martingale, named after a class of betting strategies that enjoyed popularity in 18th century France. The gambler's fortune (or the casino's, depending on our viewpoint) can be modeled with a sequence of random variables. $X_0$ will denote the gambler's fortune before the game starts, $X_1$ the fortune after one round, and so on. Such a sequence of random variables is called a stochastic process. We will require the expected value of the gambler's fortune to always be finite. How can we formalize the fairness of the game? Fairness means that the gambler's fortune does not change in expectation, i.e. the expected value of $X_n$, given $X_1, X_2, \ldots, X_{n-1}$, is the same as $X_{n-1}$. This can be written as $E(X_n | X_1, X_2, \ldots, X_{n-1}) = X_{n-1}$ or, equivalently, $E(X_n - X_{n-1} | X_1, X_2, \ldots, X_{n-1}) = 0$. The reader might be less comfortable with the first formulation. What does it mean, after all, that the conditional expected value of a random variable is another random variable?
Shouldn’t the expected value be a number? The answer is that in order to have solid theoretical foundations for the definition of a martingale, we need a more sophisticated notion of conditional expectations. Such sophistication involves measure theory, which is outside the scope of this post. We will instead naively accept the definition above, and the reader can look up all the formal details in any serious probability text (such as [1]). Clearly the fair casino we constructed for the ABRACADABRA exercise is an example of a martingale. Another example is the simple symmetric random walk on the number line: we start at 0, toss a coin in each step, and move one step in the positive or negative direction based on the outcome of our coin toss. ## The Optional Stopping Theorem Remember that we closed our casino as soon as the word ABRACADABRA appeared and we claimed that our casino was also fair at that time. In mathematical language, the closed casino is called a stopped martingale. The stopped martingale is constructed as follows: we wait until our martingale X exhibits a certain behaviour (e.g. the word ABRACADABRA is typed by the monkey), and we define a new martingale X’ as follows: let $X'_n = X_n$ if $n < T$ and $X'_n = X_T$ if $n \ge T$ where $T$ denotes the stopping time, i.e. the time at which the desired event occurs. Notice that $T$ itself is a random variable. We require our stopping time $T$ to depend only on the past, i.e. that at any time we should be able to decide whether the event that we are waiting for has already happened or not (without looking into the future). This is a very reasonable requirement. If we could look into the future, we could obviously cheat by closing our casino just before some gambler would win a huge prize. We said that the expected wealth of the casino at the stopping time is the same as the initial wealth. 
This is guaranteed by Doob’s optional stopping theorem, which states that under certain conditions, the expected value of a martingale at the stopping time is equal to its expected initial value. Theorem: (Doob’s optional stopping theorem) Let $X_n$ be a martingale stopped at step $T$, and suppose one of the following three conditions hold: 1. The stopping time $T$ is almost surely bounded by some constant; 2. The stopping time $T$ is almost surely finite and every step of the stopped martingale $X_n$ is almost surely bounded by some constant; or 3. The expected stopping time $E(T)$ is finite and the absolute value of the martingale increments $|X_n-X_{n-1}|$ are almost surely bounded by a constant. Then $E(X_T) = E(X_0).$ We omit the proof because it requires measure theory, but the interested reader can see it in these notes. For applications, (1) and (2) are the trivial cases. In the ABRACADABRA problem, the third condition holds: the expected stopping time is finite (in fact, we showed using the geometric distribution that it is less than $26^{12}$) and the absolute value of a martingale increment is either 1 or a net payoff which is bounded by $26^{11}+26^4+26$. This shows that our solution is indeed correct. ## Gambler’s Ruin Another famous application of martingales is the gambler’s ruin problem. This problem models the following game: there are two players, the first player has $a$ dollars, the second player has $b$ dollars. In each round they toss a coin and the loser gives one dollar to the winner. The game ends when one of the players runs out of money. There are two obvious questions: (1) what is the probability that the first player wins and (2) how long will the game take in expectation? Let $X_n$ denote the change in the second player’s fortune, and set $X_0 = 0$. Let $T_k$ denote the first time $s$ when $X_s = k$. Then our first question can be formalized as trying to determine $\Pr(T_{-b} < T_a)$. Let $t = \min \{ T_{-b}, T_a\}$. 
Clearly $t$ is a stopping time. By the optional stopping theorem we have that $\displaystyle 0=E(X_0)=E(X_t)=-b\Pr(T_{-b} < T_a)+a(1-\Pr(T_{-b} < T_a))$ thus $\Pr(T_{-b} < T_a)=\frac{a}{a+b}$. I would like to ask the reader to try to answer the second question. It is a little bit trickier than the first one, though, so here is a hint: $X_n^2-n$ is also a martingale (prove it), and applying the optional stopping theorem to it leads to the answer. ## A Randomized Algorithm for 2-SAT The reader is probably familiar with 3-SAT, the first problem shown to be NP-complete. Recall that 3-SAT is the following problem: given a boolean formula in conjunctive normal form with at most three literals in each clause, decide whether there is a satisfying truth assignment. It is natural to ask why 3 is special, i.e. why don’t we work with $k$-SAT for some $k \ne 3$ instead? Clearly the hardness of the problem is monotone increasing in $k$ since $k$-SAT is a special case of $(k+1)$-SAT. On the other hand, SAT (without any bound on the number of literals per clause) is clearly in NP, and 3-SAT is NP-complete, thus 3-SAT is just as hard as $k$-SAT for any $k>3$. So the only question is: what can we say about 2-SAT? It turns out that 2-SAT is easier than satisfiability in general: 2-SAT is in P. There are many algorithms for solving 2-SAT. Here is one deterministic algorithm: associate a graph to the 2-SAT instance such that there is one vertex for each variable and each negated variable, and there is a directed edge from the literal $x$ to the literal $y$ if there is a clause $(\bar x \lor y)$. Recall that $\bar x \lor y$ is equivalent to $x \implies y$, so the edges show the implications between the variables. Clearly the 2-SAT instance is not satisfiable if there is a variable $x$ such that there are directed paths $x \to \bar x$ and $\bar x \to x$ (since $x \Leftrightarrow \bar x$ is always false).
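This path condition is easy to test directly. Below is an illustrative brute-force sketch (the literal encoding as signed integers is my own choice; real implementations compute strongly connected components in linear time instead of doing repeated searches):

```python
def implication_unsat(n, clauses):
    """Clauses are pairs of literals: +i / -i for variable i (1-indexed).
    Returns True iff some x and its negation each reach the other."""
    # each clause (u or v) yields the implications not-u => v and not-v => u
    edges = {}
    for u, v in clauses:
        edges.setdefault(-u, set()).add(v)
        edges.setdefault(-v, set()).add(u)

    def reaches(a, b):  # depth-first search from literal a, looking for literal b
        seen, stack = set(), [a]
        while stack:
            lit = stack.pop()
            if lit == b:
                return True
            if lit not in seen:
                seen.add(lit)
                stack.extend(edges.get(lit, ()))
        return False

    return any(reaches(x, -x) and reaches(-x, x) for x in range(1, n + 1))

sat_instance = implication_unsat(2, [(1, 2), (-1, 2), (1, -2)])              # satisfiable
unsat_instance = implication_unsat(2, [(1, 2), (-1, 2), (1, -2), (-1, -2)])  # unsatisfiable
```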
It can be shown that this is not only a sufficient but also a necessary condition for unsatisfiability, hence the 2-SAT instance is satisfiable if and only if there are no such paths. If there are directed paths from one vertex of a graph to another and vice versa then they are said to belong to the same strongly connected component. There are several graph algorithms for finding strongly connected components of directed graphs, the most well-known of which are based on depth-first search. Now we give a very simple randomized algorithm for 2-SAT (due to Christos Papadimitriou in a ’91 paper): start with an arbitrary truth assignment and while there are unsatisfied clauses, pick one and flip the truth value of a random literal in it. Stop after $O(n^2)$ rounds where $n$ denotes the number of variables. Clearly if the formula is not satisfiable then nothing can go wrong: we will never find a satisfying truth assignment. If the formula is satisfiable, we want to argue that with high probability we will find a satisfying truth assignment in $O(n^2)$ steps. The idea of the proof is the following: fix an arbitrary satisfying truth assignment and consider the Hamming distance of our current assignment from it. The Hamming distance of two truth assignments (or in general, of two binary vectors) is the number of coordinates in which they differ. Since we flip one bit in every step, this Hamming distance changes by $\pm 1$ in every round. It is also easy to see that in every step the distance is at least as likely to be decreased as to be increased (since we pick an unsatisfied clause, which means at least one of the two literals in the clause differs in value from the satisfying assignment). Thus this is an unfair “gambler’s ruin” problem where the gambler’s fortune is the Hamming distance from the solution, and it decreases with probability at least $\frac{1}{2}$.
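Before continuing the analysis, here is a sketch of the algorithm itself (an illustrative implementation; the clause encoding as pairs of signed integers is my own choice):

```python
import random

def two_sat(n, clauses, rng, rounds=None):
    """Papadimitriou's randomized 2-SAT. Literals are nonzero ints:
    +i means variable i is true, -i means it is false (1-indexed)."""
    assign = [rng.choice((False, True)) for _ in range(n + 1)]
    value = lambda lit: assign[abs(lit)] == (lit > 0)
    for _ in range(rounds if rounds is not None else 100 * n * n):
        unsatisfied = [c for c in clauses if not any(value(l) for l in c)]
        if not unsatisfied:
            return assign[1:]
        lit = rng.choice(rng.choice(unsatisfied))  # random literal of some unsatisfied clause
        assign[abs(lit)] = not assign[abs(lit)]
    return None  # no assignment found; probably unsatisfiable

rng = random.Random(2)
sat = two_sat(2, [(1, 2), (-1, 2), (1, -2)], rng)              # only solution: x1 = x2 = True
unsat = two_sat(2, [(1, 2), (-1, 2), (1, -2), (-1, -2)], rng)  # unsatisfiable
```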
Such a stochastic process is called a supermartingale — and this is arguably a better model for real-life casinos. (If we flip the inequality, the stochastic process we get is called a submartingale.) Also, in this case the gambler’s fortune (the Hamming distance) cannot increase beyond $n$. We can also think of this process as a random walk on the set of integers: we start at some number and in each round we make one step to the left or to the right with some probability. If we use random walk terminology, 0 is called an absorbing barrier since we stop the process when we reach 0. The number $n$, on the other hand, is called a reflecting barrier: we cannot reach $n+1$, and whenever we get close we always bounce back. There is an analogous version of the optional stopping theorem for supermartingales and submartingales, where the conditions are the same but the consequence holds with an inequality instead of equality. It follows from the optional stopping theorem that the gambler will be ruined (i.e. a satisfying truth assignment will be found) in $O(n^2)$ steps with high probability. [1] For a reference on stochastic processes and martingales, see the text of Durrett. # (Finite) Fields — A Primer So far on this blog we’ve given some introductory notes on a few kinds of algebraic structures in mathematics (most notably groups and rings, but also monoids). Fields are the next natural step in the progression. If the reader is comfortable with rings, then a field is extremely simple to describe: they’re just commutative rings with 0 and 1, where every nonzero element has a multiplicative inverse. We’ll give a list of all of the properties that go into this “simple” definition in a moment, but an even simpler way to describe a field is as a place where “arithmetic makes sense.” That is, you get operations for $+,-, \cdot , /$ which satisfy the expected properties of addition, subtraction, multiplication, and division.
So whatever the objects in your field are (and sometimes they are quite weird objects), they behave like usual numbers in a very concrete sense. So here’s the official definition of a field. We call a set $F$ a field if it is endowed with two binary operations addition ($+$) and multiplication ($\cdot$, or just symbol juxtaposition) that have the following properties: • There is an element we call 0 which is the identity for addition. • Addition is commutative and associative. • Every element $a \in F$ has a corresponding additive inverse $b$ (which may equal $a$) for which $a + b = 0$. These three properties are just the axioms of a (commutative) group, so we continue: • There is an element we call 1 (distinct from 0) which is the identity for multiplication. • Multiplication is commutative and associative. • Every nonzero element $a \in F$ has a corresponding multiplicative inverse $b$ (which may equal $a$) for which $ab = 1$. • Addition and multiplication distribute across each other as we expect. If we exclude the existence of multiplicative inverses, these properties make $F$ a commutative ring, and so we have the following chain of inclusions that describes it all $\displaystyle \textup{Fields} \subset \textup{Commutative Rings} \subset \textup{Rings} \subset \textup{Commutative Groups} \subset \textup{Groups}$ The standard examples of fields are the real numbers $\mathbb{R}$, the rationals $\mathbb{Q}$, and the complex numbers $\mathbb{C}$. But of course there are many many more. The first natural question to ask about fields is: what can they look like? For example, can there be any finite fields? A field $F$ which as a set has only finitely many elements? As we saw in our studies of groups and rings, the answer is yes! The simplest example is the set of integers modulo some prime $p$. We call them $\mathbb{Z} / p \mathbb{Z},$ or sometimes just $\mathbb{Z}/p$ for short, and let’s rederive what we know about them now. 
As a set, $\mathbb{Z}/p$ consists of the integers $\left \{ 0, 1, \dots, p-1 \right \}$. The addition and multiplication operations are easy to define: they’re just usual addition and multiplication followed by a modulus. That is, we add by $a + b \mod p$ and multiply with $ab \mod p$. This thing is clearly a commutative ring (because the integers form a commutative ring), so to show this is a field we need to show that everything has a multiplicative inverse. There is a nice fact that allows us to do this: an element $a$ has an inverse if and only if the only way for it to divide zero is the trivial way $0a = 0$. Here’s a proof. For one direction, suppose $a$ divides zero nontrivially, that is there is some $c \neq 0$ with $ac = 0$. Then if $a$ had an inverse $b$, then $0 = b(ac) = (ba)c = c$, but that’s very embarrassing for $c$ because it claimed to be nonzero. Now suppose $a$ only divides zero in the trivial way. Then look at all possible ways to multiply $a$ by other nonzero elements of $F$. No two can give you the same result because if $ax = ay$ then (without using multiplicative inverses) $a(x-y) = 0$, but we know that $a$ can only divide zero in the trivial way so $x=y$. In other words, the map “multiplication by $a$” is injective. Because the set of nonzero elements of $F$ is finite you have to hit everything (the map is in fact a bijection), and some $x$ will give you $ax = 1$. Now let’s use this fact on $\mathbb{Z}/p$ in the obvious way. Since $p$ is a prime, there are no two smaller numbers $a, b < p$ so that $ab = p$. But in $\mathbb{Z}/p$ the number $p$ is equivalent to zero (mod $p$)! So $\mathbb{Z}/p$ has no nontrivial zero divisors, and so every nonzero element has an inverse, and so it’s a finite field with $p$ elements. The next question is obvious: can we get finite fields of other sizes? The answer turns out to be yes, but not of every size. Let’s see what the constraints are.
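The zero-divisor argument is easy to check by brute force. The sketch below (illustrative only) computes which elements of $\mathbb{Z}/n$ have inverses and confirms that every nonzero element is invertible exactly when $n$ is prime:

```python
def units(n):
    """Elements of Z/n that have a multiplicative inverse mod n."""
    return {a for a in range(1, n) if any(a * b % n == 1 for b in range(1, n))}

# Z/n is a field exactly when every nonzero element is a unit; by the
# zero-divisor argument above, this happens exactly for prime n.
field_sizes = [n for n in range(2, 30) if units(n) == set(range(1, n))]
print(field_sizes)  # the primes below 30
```

For composite $n$, say $n = 6$, the elements 2, 3, and 4 divide zero nontrivially ($2 \cdot 3 = 0 \mod 6$) and so have no inverses.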
## Characteristics and Vector Spaces Say you have a finite field $k$ (lower-case k is the standard letter for a field, so let’s forget about $F$). Because the field is finite, if you take 1 and keep adding it to itself you’ll eventually run out of field elements. That is, $n = 1 + 1 + \dots + 1 = 0$ at some point. How do I know it’s zero and doesn’t keep cycling never hitting zero? Well, if the sums after $n$ and $m$ additions agree for some $n > m$, then the sum of $n-m$ ones is $0$, a time where you hit zero, contradicting the claim. Now we define $\textup{char}(k)$, the characteristic of $k$, to be the smallest $n$ (sums of 1 with itself) for which $n = 0$. If there is no such $n$ (this can happen if $k$ is infinite, but doesn’t always happen for infinite fields), then we say the characteristic is zero. It would probably make more sense to say the characteristic is infinite, but that’s just the way it is. Of course, for finite fields the characteristic is always positive. So what can we say about this number? We have seen lots of examples where it’s prime, but is it always prime? It turns out the answer is yes! For if $ab = n = \textup{char}(k)$ is composite, then by the minimality of $n$ we get $a,b \neq 0$, but $ab = n = 0$. This can’t happen by our above observation, because being a zero divisor means you have no inverse! Contradiction, sucker. But it might happen that there are elements of $k$ that can’t be written as $1 + 1 + \dots + 1$ for any number of terms. We’ll construct examples in a minute (in fact, we’ll classify all finite fields), but we already have a lot of information about what those fields might look like. Indeed, since every field has 1 in it, we just showed that every finite field contains a smaller field (a subfield) of all the ways to add 1 to itself. Since the characteristic is prime, the subfield is a copy of $\mathbb{Z}/p$ for $p = \textup{char}(k)$. We call this special subfield the prime subfield of $k$.
The relationship between the possible other elements of $k$ and the prime subfield is very neat. Because think about it: if $k$ is your field and $F$ is your prime subfield, then the elements of $k$ can interact with $F$ just like any other field elements. But if we separate $k$ from $F$ (make a separate copy of $F$), and just think of $k$ as having addition, then the relationship with $F$ is that of a vector space! In fact, whenever you have two fields $k \subset k'$, the latter has the structure of a vector space over the former. Back to finite fields, $k$ is a vector space over its prime subfield, and now we can impose all the power and might of linear algebra against it. What’s its dimension? Finite because $k$ is a finite set! Call the dimension $m$; then we get a basis $v_1, \dots, v_m$. Then the crucial part: every element of $k$ has a unique representation in terms of the basis. So they are expanded in the form $\displaystyle f_1v_1 + \dots + f_mv_m$ where the $f_i$ come from $F$. But now, since these are all just field operations, every possible choice for the $f_i$ has to give you a different field element. And how many choices are there for the $f_i$? Each one has exactly $|F| = \textup{char}(k) = p$ choices. And so by counting we get that $k$ has $p^m$ many elements. This is getting exciting quickly, but we have to pace ourselves! This is a constraint on the possible size of a finite field, but can we realize it for all choices of $p, m$? The answer is again yes, and in the next section we’ll see how.  But reader be warned: the formal way to do it requires a little bit of familiarity with ideals in rings to understand the construction. I’ll try to avoid too much technical stuff, but if you don’t know what an ideal is, you should expect to get lost (it’s okay, that’s the nature of learning new math!). ## Constructing All Finite Fields Let’s describe a construction. Take a finite field $k$ of characteristic $p$, and say you want to make a field of size $p^m$.
What we need to do is construct a field extension, that is, find a bigger field containing $k$ so that the vector space dimension of our new field over $k$ is exactly $m$. What you can do is first form the ring of polynomials with coefficients in $k$. This ring is usually denoted $k[x]$, and it’s easy to check it’s a ring (polynomial addition and multiplication are defined in the usual way). Now if I were speaking to a mathematician I would say, “From here you take an irreducible monic polynomial $p(x)$ of degree $m$, and quotient your ring by the principal ideal generated by $p$. The result is the field we want!” In less compact terms, the idea is exactly the same as modular arithmetic on integers. Instead of doing arithmetic with integers modulo some prime (an irreducible integer), we’re doing arithmetic with polynomials modulo some irreducible polynomial $p(x)$. Now you see the reason I used $p$ for a polynomial, to highlight the parallel thought process. What I mean by “modulo a polynomial” is that you divide some element $f$ in your ring by $p$ as much as you can, until the degree of the remainder is smaller than the degree of $p(x)$, and that’s the element of your quotient. The Euclidean algorithm guarantees that we can do this no matter what $k$ is (in the formal parlance, $k[x]$ is called a Euclidean domain for this very reason). In still other words, the “quotient structure” tells us that two polynomials $f, g \in k[x]$ are considered to be the same in $k[x] / p$ if and only if $f - g$ is divisible by $p$. This is actually the same definition for $\mathbb{Z}/p$, with polynomials replacing numbers, and if you haven’t already you can start to imagine why people decided to study rings in general. Let’s do a specific example to see what’s going on. Say we’re working with $k = \mathbb{Z}/3$ and we want to compute a field of size $27 = 3^3$. First we need to find a monic irreducible polynomial of degree $3$. 
For now, I just happen to know one: $p(x) = x^3 - x + 1$. In fact, we can check it’s irreducible, because to be reducible it would have to have a linear factor and hence a root in $\mathbb{Z}/3$. But it’s easy to see that if you compute $p(0), p(1), p(2)$ and take (mod 3) you never get zero. So I’m calling this new ring $\displaystyle \frac{\mathbb{Z}/3[x]}{(x^3 - x + 1)}$ It happens to be a field, and we can argue it with a whole lot of ring theory. First, we know an irreducible element of this ring is also prime (because the ring is a unique factorization domain), and prime elements generate maximal ideals (because it’s a principal ideal domain), and if you quotient by a maximal ideal you get a field (true of all rings). But if we want to avoid that kind of argument and just focus on this ring, we can explicitly construct inverses. Say you have a polynomial $f(x)$, and for illustration purposes we’ll choose $f(x) = x^4 + x^2 - 1$. Now in the quotient ring we could do polynomial long division to find remainders, but another trick is just to notice that the quotient is equivalent to the condition that $x^3 = x - 1$. So we can reduce $f(x)$ by applying this rule to $x^4 = x^3 x$ to get $\displaystyle f(x) = x^2 + x(x-1) - 1 = 2x^2 - x - 1$ Now what’s the inverse of $f(x)$? Well we need a polynomial $g(x) = ax^2 + bx + c$ whose product with $f$ gives us something which is equivalent to 1, after you reduce by $x^3 - x + 1$. A few minutes of algebra later and you’ll discover that this is equivalent to the following polynomial being identically 1 $\displaystyle (a-b+2c)x^2 + (-3a+b-c)x + (a - 2b - c) = 1$ In other words, we get a system of linear equations which we need to solve: \displaystyle \begin{aligned} a & - & b & + & 2c & = 0 \\ -3a & + & b & - & c &= 0 \\ a & - & 2b & - & c &= 1 \end{aligned} And from here you can solve with your favorite linear algebra techniques.
This is a good exercise for working in fields, because you get to abuse the prime subfield being characteristic 3 to say terrifying things like $-1 = 2$ and $-3a = 0$. The end result is that the inverse polynomial is $x^2 + 2x + 2$, and if you were really determined you could write a program to compute these linear systems for any input polynomial and ensure they’re all solvable. We prefer the ring theoretic proof. In any case, it’s clear that taking a polynomial ring like this and quotienting by a monic irreducible polynomial gives you a field. We just control the size of that field by choosing the degree of the irreducible polynomial to our satisfaction. And that’s how we get all finite fields! ## One Last Word on Irreducible Polynomials One thing we’ve avoided is the question of why irreducible monic polynomials exist of all possible degrees $m$ over any $\mathbb{Z}/p$ (and as a consequence we can actually construct finite fields of all possible sizes). Proving this requires a bit of group theory, but it turns out that the polynomial $x^{p^m} - x$ has all degree $m$ monic irreducible polynomials as factors. But perhaps a better question (for computer scientists) is how do we work over a finite field in practice? One way is to work with polynomial arithmetic as we described above, but this has some downsides: it requires us to compute these irreducible monic polynomials (which doesn’t sound so hard, maybe), to do polynomial long division every time we multiply, and to compute inverses by solving a linear system. But we can do better for some special finite fields, say where the characteristic is 2 (smells like binary) or we’re only looking at $F_{p^2}$. The benefit there is that we aren’t forced to use polynomials. We can come up with some other kind of structure (say, matrices of a special form) which happens to have the same field structure and makes computing operations relatively painless.
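In the meantime, the polynomial-arithmetic approach is easy to prototype. The sketch below (illustrative; coefficient lists are stored highest degree first) redoes the $GF(27)$ example, reducing $f$ and finding its inverse by brute force over all 27 field elements, and then counts the monic irreducible cubics over $\mathbb{Z}/3$ using the no-roots test:

```python
from itertools import product

P = 3
M = [1, 0, 2, 1]  # x^3 - x + 1 over Z/3, written as x^3 + 2x + 1

def polymul(f, g):
    """Multiply two coefficient lists over Z/P."""
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] = (out[i + j] + a * b) % P
    return out

def polymod(f, m=M):
    """Remainder of f modulo the monic polynomial m, coefficients in Z/P."""
    f = [c % P for c in f]
    while len(f) >= len(m):
        lead = f[0]
        for i in range(len(m)):
            f[i] = (f[i] - lead * m[i]) % P
        f.pop(0)
    while len(f) > 1 and f[0] == 0:
        f.pop(0)
    return f

f = polymod([1, 0, 1, 0, -1])  # x^4 + x^2 - 1, reduced mod (3, x^3 - x + 1)
inv = next(list(g) for g in product(range(P), repeat=3)
           if polymod(polymul(f, list(g))) == [1])

def evaluate(poly, x):
    acc = 0
    for c in poly:
        acc = (acc * x + c) % P
    return acc

# a cubic over Z/3 is irreducible iff it has no root in {0, 1, 2}
cubics = [[1, a, b, c] for a in range(P) for b in range(P) for c in range(P)]
irreducible = [q for q in cubics if all(evaluate(q, x) != 0 for x in range(P))]
```

The count of irreducible cubics agrees with the general formula $(3^3 - 3)/3 = 8$ for monic irreducible cubics over $\mathbb{Z}/3$, and $x^3 - x + 1$ is among them.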
We’ll see how this is done in the future, and see it applied to cryptography when we continue with our series on elliptic curve cryptography. Until then! # How to Conquer Tensorphobia A professor at Stanford once said, If you really want to impress your friends and confound your enemies, you can invoke tensor products… People run in terror from the $\otimes$ symbol. He was explaining some aspects of multidimensional Fourier transforms, but this comment is only half in jest; people get confused by tensor products. It’s often for good reason. People who really understand tensors feel obligated to explain it using abstract language (specifically, universal properties). And the people who explain it in elementary terms don’t really understand tensors. This post is an attempt to bridge the gap between the elementary and advanced understandings of tensors. We’ll start with the elementary (axiomatic) approach, just to get a good feel for the objects we’re working with and their essential properties. Then we’ll transition to the “universal” mode of thought, with the express purpose of enlightening us as to why the properties are both necessary and natural. But above all, we intend to be sufficiently friendly so as to not make anybody run in fear. This means lots of examples and preferring words over symbols. Unfortunately, we simply can’t get by without the reader knowing the very basics of linear algebra (the content of our first two primers on linear algebra (1) (2), though the only important part of the second is the definition of an inner product). So let’s begin. ## Tensors as a Bunch of Axioms Before we get into the thick of things I should clarify some basic terminology. Tensors are just vectors in a special vector space. We’ll see that such a vector space comes about by combining two smaller vector spaces via a tensor product. So the tensor product is an operation combining vector spaces, and tensors are the elements of the resulting vector space. 
Now the use of the word product is quite suggestive, and it may lead one to think that a tensor product is similar or related to the usual direct product of vector spaces. In fact they are related (in a very precise sense), but they are far from similar. If you were pressed, however, you could start with the direct product of two vector spaces and take a mathematical machete to it until it’s so disfigured that you have to give it a new name (the tensor product). With that image in mind let’s see how that is done. For the sake of generality we’ll talk about two arbitrary finite-dimensional vector spaces $V, W$ of dimensions $n, m$. Recall that the direct product $V \times W$ is the vector space of pairs $(v,w)$ where $v$ comes from $V$ and $w$ from $W$. Recall that addition in this vector space is defined componentwise ($(v_1,w_1) + (v_2, w_2) = (v_1 + v_2, w_1 + w_2)$) and scalar multiplication scales both components $\lambda (v,w) = (\lambda v, \lambda w)$. To get the tensor product space $V \otimes W$, we make the following modifications. First, we redefine what it means to do scalar multiplication. In this brave new tensor world, scalar multiplication of the whole vector-pair is declared to be the same as scalar multiplication of any component you want. In symbols, $\displaystyle \lambda (v, w) = (\lambda v, w) = (v, \lambda w)$ for all choices of scalars $\lambda$ and vectors $v, w$. Second, we change the addition operation so that it only works if one of the two components is the same. In symbols, we declare that $(v, w) + (v', w) = (v + v', w)$ only works because $w$ is the same in both pieces, and with the same rule applying if we switch the positions of $v,w$ above. All other additions are simply declared to be new vectors. I.e. $(x,y) + (z,w)$ is simply itself.
It’s a valid addition — we need to be able to add stuff to be a vector space — but you just can’t combine it any further unless you can use the scalar multiplication to factor out some things so that $y=w$ or $x=z$. To say it still one more time, a general element of the tensor $V \otimes W$ is a sum of these pairs that can or can’t be combined by addition (in general things can’t always be combined). Finally, we rename the pair $(v,w)$ to $v \otimes w$, to distinguish it from the old vector space $V \times W$ that we’ve totally butchered and reanimated, and we call the tensor product space as a whole $V \otimes W$. Those familiar with this kind of abstract algebra will recognize quotient spaces at work here, but we won’t use that language except to note that we cover quotients and free spaces elsewhere on this blog, and that’s the formality we’re ignoring. As an example, say we’re taking the tensor product of two copies of $\mathbb{R}$. This means that our space $\mathbb{R} \otimes \mathbb{R}$ is comprised of vectors like $3 \otimes 5$, and moreover that the following operations are completely legitimate. $3 \otimes 5 + 1 \otimes (-5) = 3 \otimes 5 + (-1) \otimes 5 = 2 \otimes 5$ $6 \otimes 1 + 3\pi \otimes \pi = 3 \otimes 2 + 3 \otimes \pi^2 = 3 \otimes (2 + \pi^2)$ Cool. This seemingly innocuous change clearly has huge implications on the structure of the space. We’ll get to specifics about how different tensors are from regular products later in this post, but for now we haven’t even proved this thing is a vector space. It might not be obvious, but if you go and do the formalities and write the thing as a quotient of a free vector space (as we mentioned we wouldn’t do) then you know that quotients of vector spaces are again vector spaces. So we get that one for free. But even without that it should be pretty obvious: we’re essentially just declaring that all the axioms of a vector space hold when we want them to. 
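The sample computations above are easy to sanity-check numerically, because $\mathbb{R} \otimes \mathbb{R}$ can be identified with $\mathbb{R}$ by sending $a \otimes b \mapsto ab$ (a standard fact, which we take on faith for this illustration):

```python
import math

def as_number(pure_tensors):
    """Interpret a formal sum of pure tensors [(a, b), ...] in R (x) R
    as a real number via a (x) b -> a*b."""
    return sum(a * b for a, b in pure_tensors)

lhs1 = as_number([(3, 5), (1, -5)])                 # 3(x)5 + 1(x)(-5)
rhs1 = as_number([(2, 5)])                          # 2(x)5
lhs2 = as_number([(6, 1), (3 * math.pi, math.pi)])  # 6(x)1 + 3pi(x)pi
rhs2 = as_number([(3, 2 + math.pi ** 2)])           # 3(x)(2 + pi^2)
```

Both identities hold: the first pair of values is exactly 10, and the second pair agrees up to floating-point error.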
So if you were wondering whether $\lambda (a \otimes b + c \otimes d) = \lambda(a \otimes b) + \lambda(c \otimes d)$ The answer is yes, by force of will. So just to recall, the axioms of a tensor space $V \otimes W$ are 1. The “basic” vectors are $v \otimes w$ for $v \in V, w \in W$, and they’re used to build up all other vectors. 2. Addition is symbolic, unless one of the components is the same in both addends, in which case $(v_1, w) + (v_2, w) = (v_1+ v_2, w)$ and $(v, w_1) + (v,w_2) = (v, w_1 + w_2)$. 3. You can freely move scalar multiples around the components of $v \otimes w$. 4. The rest of the vector space axioms (distributivity, additive inverses, etc) are assumed with extreme prejudice. Naturally, one can extend this definition to $n$-fold tensor products, like $V_1 \otimes V_2 \otimes \dots \otimes V_d$. Here we write the vectors as sums of things like $v_1 \otimes \dots \otimes v_d$, and we enforce that addition can only be combined if all but one coordinate is the same in the addends, and scalar multiples move around to all coordinates equally freely. ## So where does it come from?! By now we have this definition and we can play with tensors, but any sane mathematically minded person would protest, “What the hell would cause anyone to come up with such a definition? I thought mathematics was supposed to be elegant!” It’s an understandable position, but let me now try to convince you that tensor products are very natural. The main intrinsic motivation for the rest of this section will be this: We have all these interesting mathematical objects, but over the years we have discovered that the maps between objects are the truly interesting things. A fair warning: although we’ll maintain a gradual pace and informal language in what follows, by the end of this section you’ll be reading more or less mature 20th-century mathematics.
It’s quite alright to stop with the elementary understanding (and skip to the last section for some cool notes about computing), but we trust that the intrepid readers will push on. So with that understanding we turn to multilinear maps. Of course, the first substantive thing we study in linear algebra is the notion of a linear map between vector spaces. That is, a map $f: V \to W$ that factors through addition and scalar multiplication (i.e. $f(v + v') = f(v) + f(v')$ and $f(\lambda v) = \lambda f(v)$). But it turns out that lots of maps we work with have much stronger properties worth studying. For example, if we think of matrix multiplication as an operation, call it $m$, then $m$ takes in two matrices and spits out their product $m(A,B) = AB$ Now what would be an appropriate notion of linearity for this map? Certainly it is linear in the first coordinate, because if we fix $B$ then $m(A+C, B) = (A+C)B = AB + CB = m(A,B) + m(C,B)$ And for the same reason it’s linear in the second coordinate. But it is most definitely not linear in both coordinates simultaneously. In other words, $m(A+B, C+D) = (A+B)(C+D) = AC + AD + BC + BD \neq AC + BD = m(A,C) + m(B,D)$ In fact, the only operation satisfying linearity in its two coordinates separately and also this kind of joint linearity is the zero map! (Try to prove this as an exercise.) So the strongest kind of linearity we could reasonably impose is that $m$ is linear in each coordinate when all else is fixed. Note that this property allows us to shift around scalar multiples, too. For example, $\displaystyle m(\lambda A, B) = \lambda AB = A (\lambda B) = m(A, \lambda B) = \lambda m(A,B)$ Starting to see the wispy strands of a connection to tensors? Good, but hold it in for a bit longer. This single-coordinate-wise-linear property is called bilinearity when we only have two coordinates, and multilinearity when we have more.
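These identities about matrix multiplication are easy to check numerically (using NumPy, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
A, B, C, D = (rng.standard_normal((3, 3)) for _ in range(4))

def m(X, Y):
    return X @ Y

# linear in each coordinate separately, with the other one fixed:
sep1 = np.allclose(m(A + C, B), m(A, B) + m(C, B))
sep2 = np.allclose(m(2.0 * A, B), m(A, 2.0 * B))
# ...but not linear in both coordinates simultaneously:
joint = np.allclose(m(A + B, C + D), m(A, C) + m(B, D))
```

For random matrices the "joint" check fails, because the cross terms $AD + BC$ don't vanish.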
Here are some examples of nice multilinear maps that show up everywhere: • If $V$ is an inner product space over $\mathbb{R}$, then the inner product is bilinear. • The determinant of a matrix is a multilinear map if we view the columns of the matrix as vector arguments. • The cross product of vectors in $\mathbb{R}^3$ is bilinear. There are many other examples, but you should have at least passing familiarity with these notions, and it’s enough to convince us that multilinearity is worth studying abstractly. And so what tensors do is give a sort of classification of multilinear maps. The idea is that every multilinear map $f$ from a product vector space $U_1 \times \dots \times U_d$ to any vector space $Y$ can be written first as a multilinear map to the tensor space $\displaystyle \alpha : U_1 \times \dots \times U_d \to U_1 \otimes \dots \otimes U_d$ Followed by a linear map to $Y$, $\displaystyle \hat{f} : U_1 \otimes \dots \otimes U_d \to Y$ And the important part is that $\alpha$ doesn’t depend on the original $f$ (but $\hat{f}$ does). One usually draws this as a single diagram: And to say this diagram commutes is to say that all possible ways to get from one point to another are equivalent (the compositions of the corresponding maps you follow are equal, i.e. $f = \hat{f} \alpha$). In fuzzy words, the tensor product is like the gatekeeper of all multilinear maps, and $\alpha$ is the gate. Yet another way to say this is that $\alpha$ is the most general possible multilinear map that can be constructed from $U_1 \times \dots \times U_d$. Moreover, the tensor product itself is uniquely defined by having a “most-general” $\alpha$ (up to isomorphism). This notion is often referred to by mathematicians as the “universal property” of the tensor product. 
And they might say something like “the tensor product is initial with respect to multilinear mappings from the standard product.” We discuss language like this in detail in this blog’s series on category theory, but it’s essentially a super-compact (and almost too vague) way of saying what the diagram says. Let’s explore this definition when we specialize to a tensor of two vector spaces, and it will give us a good understanding of $\alpha$ (which is really incredibly simple, but people like to muck it up with choices of coordinates and summations). So fix $V, W$ as vector spaces and look at the diagram. What is $\alpha$ in this case? Well it just sends $(v,w) \mapsto v \otimes w$. Is this map multilinear? Well if we fix $w$ then $\displaystyle \alpha(v_1 + v_2, w) = (v_1 + v_2) \otimes w = v_1 \otimes w + v_2 \otimes w = \alpha(v_1, w) + \alpha (v_2, w)$ and $\displaystyle \alpha(\lambda v, w) = (\lambda v) \otimes w = (\lambda) (v \otimes w) = \lambda \alpha(v,w)$ And our familiarity with tensors now tells us that the other side holds too. Actually, rather than say this is a result of our “familiarity with tensors,” the truth is that this is how we know that we need to define the properties of tensors as we did. It’s all because we designed tensors to be the gatekeepers of multilinear maps! So now let’s prove that all maps $f : V \times W \to Y$ can be decomposed into an $\alpha$ part and a $\hat{f}$ part. To do this we need to know what data uniquely defines a multilinear map. For usual linear maps, all we had to do was define the effect of the map on each element of a basis (the rest was uniquely determined by the linearity property). We know what a basis of $V \times W$ is: it’s just the union of the bases of the pieces. Say that $V$ has a basis $v_1, \dots, v_n$ and $W$ has $w_1, \dots, w_m$, then a basis for the product is just $((v_1, 0), \dots, (v_n,0), (0,w_1), \dots, (0,w_m))$. But multilinear maps are more nuanced, because they have two arguments.
In order to say “what they do on a basis” we really need to know how they act on all possible pairs of basis elements. For how else could we determine $f(v_1 + v_2, w_1)$? If there are $n$ of the $v_i$‘s and $m$ of the $w_i$‘s, then there are $nm$ such pairs $f(v_i, w_j)$. Uncoincidentally, as $V \otimes W$ is a vector space, its basis can also be constructed in terms of the bases of $V$ and $W$. You simply take all possible tensors $v_i \otimes w_j$. Since every $v \in V, w \in W$ can be written in terms of their bases, it’s clear that any tensor $\sum_{k} a_k \otimes b_k$ can also be written in terms of the basis tensors $v_i \otimes w_j$ (by simply expanding each $a_k, b_k$ in terms of their respective bases, and getting a larger sum of more basic tensors). Just to drive this point home, if $(e_1, e_2, e_3)$ is a basis for $\mathbb{R}^3$, and $(g_1, g_2)$ a basis for $\mathbb{R}^2$, then the tensor space $\mathbb{R}^3 \otimes \mathbb{R}^2$ has basis

$(e_1 \otimes g_1, e_1 \otimes g_2, e_2 \otimes g_1, e_2 \otimes g_2, e_3 \otimes g_1, e_3 \otimes g_2)$

It’s a theorem that finite-dimensional vector spaces of equal dimension are isomorphic, so the length of this basis (6) tells us that $\mathbb{R}^3 \otimes \mathbb{R}^2 \cong \mathbb{R}^6$. So fine, back to decomposing $f$. All we have left to do is use the data given by $f$ (the effect on pairs of basis elements) to define $\hat{f} : V \otimes W \to Y$. The definition is rather straightforward, as we have already made the suggestive move of showing that the basis for the tensor space ($v_i \otimes w_j$) and the definition of $f$ ($f(v_i, w_j)$) are essentially the same. That is, just take $\hat{f}(v_i \otimes w_j) = f(v_i, w_j)$. Note that this is just defined on the basis elements, and so we extend to all other vectors in the tensor space by imposing linearity (defining $\hat{f}$ to split across sums of tensors as needed). Is this well defined? Well, multilinearity of $f$ forces it to be so.
For if we had two equal tensors, say, $\lambda v \otimes w = v \otimes \lambda w$, then we know that $f$ has to respect their equality, because $f(\lambda v, w) = f(v, \lambda w)$, so $\hat{f}$ will take the same value on equal tensors regardless of which representative we pick (where we decide to put the $\lambda$). The same idea works for sums, so everything checks out, and $f(v,w)$ is equal to $\hat{f}(\alpha(v,w))$, as desired. Moreover, we didn’t make any choices in constructing $\hat{f}$. If you retrace our steps in the argument, you’ll see that everything was essentially decided for us once we fixed a choice of a basis (by our wise decisions in defining $V \otimes W$). Since the construction would be isomorphic if we changed the basis, our choice of $\hat{f}$ is unique. There is a lot more to say about tensors, and indeed there are some other useful ways to think about tensors that we’ve completely ignored. But this discussion should make it clear why we define tensors the way we do. Hopefully it eliminates most of the mystery in tensors, although there is still a lot of mystery in trying to compute stuff using tensors. So we’ll wrap up this post with a short discussion about that.

## Computability and Stuff

It should be clear by now that plain product spaces $V \times W$ and tensor product spaces $V \otimes W$ are extremely different. In fact, they’re only related in that their underlying sets of vectors are built from pairs of vectors in $V$ and $W$. Avid readers of this blog will also know that operations involving matrices (like row reduction, eigenvalue computations, etc.) are generally efficient, or at least they run in polynomial time so they’re not crazy impractically slow for modest inputs. On the other hand, it turns out that almost every question you might want to ask about tensors is difficult to answer computationally. As with the definition of the tensor product, this is no mere coincidence.
There is something deep going on with tensors, and it has serious implications regarding quantum computing. More on that in a future post, but for now let’s just focus on one hard problem to answer for tensors. As you know, the most general way to write an element of a tensor space $U_1 \otimes \dots \otimes U_d$ is as a sum of the basic-looking tensors,

$\displaystyle \sum_{k} a_{1,k} \otimes a_{2,k} \otimes \dots \otimes a_{d,k}$

where the $a_{i,k}$ may be sums of vectors from $U_i$ themselves. But as we saw with our examples over $\mathbb{R}$, there can be lots of different ways to write a tensor. If you’re lucky, you can write the entire tensor as a one-term sum, that is just a tensor $a \otimes b$. If you can do this we call the tensor a pure tensor, or a rank 1 tensor. We then have the following natural definition and problem:

Definition: The rank of a tensor $x \in U_1 \otimes \dots \otimes U_d$ is the minimum number of terms in any representation of $x$ as a sum of pure tensors. The one exception is the zero element, which has rank zero by convention.

Problem: Given a tensor $x \in k^{n_1} \otimes k^{n_2} \otimes k^{n_3}$ where $k$ is a field, compute its rank.

Of course this isn’t possible in standard computing models unless you can represent the elements of the field (and hence the elements of the vector space in question) in a computer program. So we restrict $k$ to be either the rational numbers $\mathbb{Q}$ or a finite field $\mathbb{F}_{q}$. Even though the problem is simple to state, it was proved in 1990 (a result of Johan Håstad) that tensor rank is hard to compute. Specifically:

Theorem: Computing tensor rank is NP-hard when $k = \mathbb{Q}$ and NP-complete when $k$ is a finite field.

The details are given in Håstad’s paper, but the important work that followed essentially showed that most problems involving tensors are hard to compute (many of them by reduction from computing rank).
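To make the hardness result concrete, note that it concerns tensors of order three and up. For order-2 tensors, elements of $U_1 \otimes U_2$ viewed as matrices, this notion of rank coincides with ordinary matrix rank and is easy to compute. A small numpy sketch (my addition, not part of the original post):

```python
import numpy as np

# a rank-1 ("pure") tensor of two vectors is their outer product
a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0])
pure = np.outer(a, b)  # element (i, j) is a[i] * b[j]
assert np.linalg.matrix_rank(pure) == 1

# a sum of two generic pure tensors has rank 2
c = np.array([0.0, 1.0, 0.0])
d = np.array([1.0, -1.0])
t = np.outer(a, b) + np.outer(c, d)
assert np.linalg.matrix_rank(t) == 2
```

No analogous polynomial-time routine exists for order-3 tensors, which is exactly the content of Håstad's theorem.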
This is unfortunate, but also displays the power of tensors. In fact, tensors are so powerful that many believe understanding them better will lead to insight in some very important problems, like finding faster matrix multiplication algorithms or proving circuit lower bounds (which is closely related to P vs NP). Finding low-rank tensor approximations is also a key technique in a lot of recent machine learning and data mining algorithms. With this in mind, the enterprising reader will probably agree that understanding tensors is both valuable and useful. In the future of this blog we’ll hope to see some of these techniques, but at the very least we’ll see the return of tensors when we delve into quantum computing. Until next time! # Lagrangians for the Amnesiac For a while I’ve been meaning to do some more advanced posts on optimization problems of all flavors. One technique that comes up over and over again is Lagrange multipliers, so this post is going to be a leisurely reminder of that technique. I often forget how to do these basic calculus-type things, so it’s good practice. We will assume something about the reader’s knowledge, but it’s a short list: know how to operate with vectors and the dot product, know how to take a partial derivative, and know that in single-variable calculus the local maxima and minima of a differentiable function $f(x)$ occur when the derivative $f'(x)$ vanishes. All of the functions we’ll work with in this post will have infinitely many derivatives (i.e. smooth). So things will be nice. ## The Gradient The gradient of a multivariable function is the natural extension of the derivative of a single-variable function. If $f(x_1, \dots, x_n)$ is a differentiable function, the data of the gradient of $f$ consists of all of the partial derivatives $\partial f / \partial x_i$. 
It’s usually written as a vector

$\displaystyle \nabla f = \left ( \frac{\partial f}{\partial x_1}, \dots, \frac{\partial f}{\partial x_n} \right )$

To make things easier for ourselves, we’ll just call $f$ a function $f(x)$ and understand $x$ to be a vector in $\mathbb{R}^n$. We can also think of $\nabla f$ as a function which takes in vectors and spits out vectors, by plugging in the input vector into each $\partial f / \partial x_i$. And the reason we do this is because it lets us describe the derivative of $f$ at a point as a linear map based on the gradient. That is, if we want to know how fast $f$ is growing along a particular vector $v$ and at a particular point $(x, f(x))$, we can just take a dot product of $v$ with $\nabla f(x)$. I like to call dot products inner products, and use the notation $\left \langle \nabla f(x), v \right \rangle$. Here $v$ is a vector in $\mathbb{R}^n$ which we think of as a “tangent vector” to the surface defined by $f$. And if we scale $v$ bigger or smaller, the value of the derivative scales with it (of course, because the derivative is a linear map!). Usually we use unit vectors to represent directions, but there’s no reason we have to. Calculus textbooks often require this to define a “directional derivative,” but perhaps it is better to understand the linear algebra than to memorize these arbitrary choices. For example, let $f(x,y,z) = xyz$. Then $\nabla f = (yz, xz, xy)$, and $\nabla f(1,2,1) = (2, 1, 2)$. Now if we pick a vector to go along, say, $v = (0,-1,1)$, we get that the derivative of $f$ along $v$ is $\left \langle (2,1,2), (0,-1,1) \right \rangle = 1$. Just as important as computing derivatives is finding where the derivative is zero, and the geometry of the gradient can help us here. Specifically, if we think of our function $f$ as a surface sitting in $\mathbb{R}^{n+1}$ (as in the picture below), it’s not hard to see that the gradient vector points in the direction of steepest ascent of $f$. How do we know this?
Well if you fix a point $(x_1, \dots, x_n)$ and you’re forced to use a vector $v$ of the same magnitude as $\nabla f(x)$, how can you maximize the inner product $\left \langle \nabla f(x), v \right \rangle$? Well, you just pick $v$ to be equal to $\nabla f(x)$, of course! This will turn the dot product into the squared norm of $\nabla f(x)$.

The gradient points in the direction of steepest ascent.

More generally, the operation of an inner product $\left \langle -, v \right \rangle$ is geometrically the size of the projection of the argument onto $v$ (scaled by the size of $v$), and projections of a vector $w$ onto different directions than $w$ can only be smaller in magnitude than $w$. Another way to see this is to know the “alternative” formula for the dot product

$\displaystyle \left \langle v,w \right \rangle = \left \| v \right \| \left \| w \right \| \cos(\theta)$

where $\theta$ is the angle between the vectors (in $\mathbb{R}^n$). We might not know how to get that angle, and in this post we don’t care, but we do know that $\cos(\theta)$ is between -1 and 1. And so if $v$ is fixed and we can’t change the norm of $w$ but only its direction, we will maximize the dot product when the two vectors point in the same direction, when $\theta$ is zero. All of this is just to say that the gradient at a point can be interpreted as having a specific direction. It’s the direction of steepest ascent of the surface $f$, and its size tells you how steep $f$ is at that point. The opposite direction is the direction of steepest descent, and the orthogonal directions (when $\theta = \pi /2$) have derivative zero. Now what happens if we’re at a local minimum or maximum? Well it’s necessary that $f$ is flat, and so by our discussion above the derivatives in all directions must be zero. It’s a basic linear algebra proof to show that this means the gradient is the zero vector.
You can prove this by asking what sorts of vectors $w$ have a dot product of zero with all other vectors $v$. Now once we have a local max or a local min, how do we tell which? The answer is actually a bit complicated, and it requires you to inspect the eigenvalues of the Hessian of $f$. We won’t dally on eigenvalues except to explain the idea in brief: for an $n$ variable function $f$ the Hessian of $f$ at $x$ is an $n$-by-$n$ matrix where the $i,j$ entry is the value of $(\partial^2 f / \partial x_i \partial x_j )(x)$. It just so turns out that if this matrix has only positive eigenvalues, then $x$ is a local minimum. If the eigenvalues are all negative, it’s a local max. If some are negative and some are positive, then it’s a saddle point. And if zero is an eigenvalue then we’re screwed and can’t conclude anything without more work. But all of this Hessian business isn’t particularly important for us, because most of our applications of the Lagrangian will work with functions where we already know that there is a unique global maximum or minimum. Finding where the gradient is zero is enough. As much as this author stresses the importance of linear algebra, we simply won’t need to compute any eigenvalues for this one. What we will need to do is look at optimizing functions which are constrained by some equality conditions. This is where Lagrangians come into play.

## Constrained by Equality

Often times we will want to find a minimum or maximum of a function $f(x)$, but we will have additional constraints. The simplest kind is an equality constraint. For example, we might want to find the maximum of the function $f(x, y, z) = xyz$ requiring that the point $(x,y,z)$ lies on the unit sphere. One could write this in a “canonical form”:

maximize $xyz$
subject to $x^2 + y^2 + z^2 = 1$

Way back in the scientific revolution, Fermat discovered a technique to solve such problems that was later generalized by Lagrange.
The idea is to combine these constraints into one function whose gradient provides enough information to find a maximum. Clearly such information needs to include two things: that the gradient of $xyz$ is zero, and that the constraint is satisfied. First we rewrite the constraint as $g(x,y,z) = x^2 + y^2 + z^2 - 1 = 0$, because when we’re dealing with gradients we want things to be zero. Then we form the Lagrangian of the problem. We’ll give a precise definition in a minute, but it looks like this:

$L(x,y,z,\lambda) = xyz + \lambda(x^2 + y^2 + z^2 - 1)$

That is, we’ve added a new variable $\lambda$ and added the two functions together. Let’s see what happens when we take a gradient:

$\displaystyle \frac{\partial L}{\partial x} = yz + \lambda 2x$

$\displaystyle \frac{\partial L}{\partial y} = xz + \lambda 2y$

$\displaystyle \frac{\partial L}{\partial z} = xy + \lambda 2z$

$\displaystyle \frac{\partial L}{\partial \lambda} = x^2 + y^2 + z^2 - 1$

Now if we require the gradient to be zero, the last equation is simply the original constraint, and the first three equations say that $\nabla f (x,y,z) = \lambda \nabla g (x,y,z)$. In other words, we’re saying that the two gradients must point in the same direction for the function to provide a maximum. Solving for where these equations vanish gives some trivial solutions (one variable is $\pm 1$ and the rest zero, and $\lambda = 0$), and a solution defined by $x^2 = y^2 = z^2 = 1/3$, which clearly gives the maximum. Indeed, this will work in general, and you can see a geometric and analytic proof in these notes.
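The solution above is easy to check numerically. Here is a sketch (my addition, not from the original post) using scipy's SLSQP solver, which handles the equality constraint directly; it also verifies the gradient-parallelism condition $\nabla f = \lambda \nabla g$ at the optimum:

```python
import numpy as np
from scipy.optimize import minimize

# maximize xyz on the unit sphere by minimizing -xyz
res = minimize(
    lambda p: -p[0] * p[1] * p[2],
    x0=np.array([0.5, 0.5, 0.5]),
    method="SLSQP",
    constraints=[{"type": "eq", "fun": lambda p: p @ p - 1.0}],
)
assert res.success

# the analytic answer: x^2 = y^2 = z^2 = 1/3, with maximum value (1/3)**1.5
assert np.allclose(res.x**2, 1.0 / 3.0, atol=1e-4)
assert np.isclose(-res.fun, (1.0 / 3.0) ** 1.5, atol=1e-5)

# at the optimum, the gradients of f and g are parallel
x, y, z = res.x
grad_f = np.array([y * z, x * z, x * y])
grad_g = 2.0 * res.x
lam = grad_f[0] / grad_g[0]
assert np.allclose(grad_f, lam * grad_g, atol=1e-4)
```

Note the starting point matters: from a point with a negative product, a local solver may land on a different critical point of the Lagrangian.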
Specifically, if we have an optimization problem defined by an objective function $f(x)$ to optimize, and a set of $k$ equality constraints $g_i(x) = 0$, then we can form the Lagrangian

$\displaystyle L(x, \lambda_1, \dots, \lambda_k) = f(x) + \sum_{i=1}^k \lambda_i g_i(x)$

And then a theorem of Lagrange is that all optimal solutions $x^*$ to the problem satisfy $\nabla L(x^*, \lambda_1, \dots, \lambda_k) = 0$ for some choice of $\lambda_i$. But then you have to go solve the system and figure out which of the solutions gives you your optimum.

## Convexity

As it turns out, there are some additional conditions you can place on your problem to make the solutions better behaved. One nice condition is that $f(x)$ is convex. A function is convex if any point on a line segment between two points $(x,f(x))$ and $(y,f(y))$ lies on or above the graph of $f$. In other words, for all $0 \leq t \leq 1$:

$\displaystyle f(tx + (1-t)y) \leq tf(x) + (1-t)f(y)$

Some important examples of convex functions: exponentials, quadratics whose leading coefficient is positive, squared norms of a vector variable, and linear functions. Convex functions have the nice property that they have a unique local minimum value, and hence it must also be the global minimum. Why is this? Well if you have a local minimum $x$, and any other point $y$, then by virtue of being a local minimum there is some $t$ sufficiently close to 1 so that:

$\displaystyle f(x) \leq f(tx + (1-t)y) \leq tf(x) + (1-t)f(y)$

And rearranging we get

$\displaystyle (1-t)f(x) \leq (1-t)f(y)$

So $f(x) \leq f(y)$, and since $y$ was arbitrary, $x$ is the global minimum. This alleviates our problem of having to sort through multiple solutions, and in particular it helps us to write programs to solve optimization problems: we know that techniques like gradient descent will never converge to a false local minimum. That’s all for now! The next question we might shadowily ask: what happens if we add inequality constraints?
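As a final sketch (my addition), the convexity inequality above is easy to probe numerically for one of the examples listed, the exponential:

```python
import numpy as np

f = np.exp  # the exponential is convex

rng = np.random.default_rng(1)
x, y = rng.normal(size=2)
for t in np.linspace(0.0, 1.0, 11):
    # the chord between (x, f(x)) and (y, f(y)) lies on or above the graph
    assert f(t * x + (1 - t) * y) <= t * f(x) + (1 - t) * f(y) + 1e-12
```

Checking a handful of points is of course not a proof, but it is a cheap way to catch a reversed inequality when you forget which way convexity goes.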
http://math.stackexchange.com/questions/187222/what-is-the-meaning-of-mathbb-r
# What is the meaning of $\mathbb R^+$?

For a function $f$ that maps set $A$ to $B$,

• $f\colon\mathbb R^+\to\mathbb R^+$, $f(x) = x^2$ is injective.
• $f\colon\mathbb R\to\mathbb R$, $f(x) = x^2$ is not injective since $(- x)^2 = x^2$.

What is the difference between $\mathbb R^+$ and $\mathbb R$? Additionally, what is the difference between $\mathbb N$ and $\mathbb N^+$?

$\mathbb R^+$ commonly denotes the set of positive real numbers, that is: $$\mathbb R^+ = \{x\in\mathbb R\mid x>0\}$$ It is also denoted by $\mathbb R^{>0},\mathbb R_+$ and so on. For $\mathbb N$ and $\mathbb N^+$ the difference is similar, however it may be non-existent if you define $0\notin\mathbb N$. In many set theory books $0$ is a natural number, while in analysis it is often not considered a natural number. Your mileage may vary on $\mathbb N$ vs. $\mathbb N^+$.

I see now. I thought it meant that R+ included some extra element. This interpretation makes much more sense. – James Aug 26 '12 at 20:02
Note that $\mathbb N^+ = \mathbb Z^+$. Also if you want to confuse your readers, you can write the empty set as $\mathbb N^-$, the set of negative natural numbers. :-) – celtschk Aug 26 '12 at 20:40
NB: Depending on the country, $\mathbb R^+$ is also used for the set of non-negative real numbers. – Student Aug 26 '12 at 22:15
@Student: Really? $0\in\mathbb R^+$? Sounds bizarre! – Asaf Karagila Aug 26 '12 at 22:38
@Carl: Read the first line, I also say it defines the positive numbers. I just never thought zero is positive... :-) (Also, these French just have to do everything the other way around...! :-)) – Asaf Karagila Aug 27 '12 at 1:19

Simply $\mathbb R$ means the set of real numbers. $\mathbb R^+$ means the set of positive real numbers. And $\mathbb R^-$ means the set of negative real numbers.
https://etheses.bham.ac.uk/id/eprint/1071/
# Thermal biology and establishment potential of two non-native candidate biological control agents, Nesidiocoris tenuis Reuter (Hemiptera: Miridae) and Lysiphlebus testaceipes (Cresson) (Hymenoptera: Braconidae, Aphidiinae), in the U.K.

Hughes, Gwennan Elen (2010). Thermal biology and establishment potential of two non-native candidate biological control agents, Nesidiocoris tenuis Reuter (Hemiptera: Miridae) and Lysiphlebus testaceipes (Cresson) (Hymenoptera: Braconidae, Aphidiinae), in the U.K. University of Birmingham. Ph.D.

## Abstract

*Nesidiocoris tenuis* Reuter (Hemiptera: Miridae) and *Lysiphlebus testaceipes* (Cresson) (Hymenoptera: Braconidae, Aphidiinae) are candidate biological control agents known to play an important role in the management of agricultural and horticultural pests in southern Europe. Through a series of laboratory and field assessments, this study investigates the establishment potential of these two species in cool temperate climates typical of northern Europe. Laboratory results demonstrated a low level of cold tolerance in *N. tenuis* with a developmental threshold of 12.9°C and no indication of ability to diapause. Field trials supported these findings with 100% mortality occurring after less than 4 weeks of winter field exposure. Collectively, these data suggest that *N. tenuis* is unlikely to establish outdoors in northern Europe and would therefore have little or no non-target effects on native species in such regions, thereby constituting a ‘safe’ candidate for release. Additionally, investigations into temperature-related thresholds indicated that *N. tenuis* would be an effective control agent against species with a similar activity profile to the two-spotted spider mite *Tetranychus urticae* Koch (Acari: Tetranychidae).
*Lysiphlebus testaceipes* demonstrated a greater ability to tolerate cold than *N. tenuis* but there was no indication of ability to diapause. With a developmental threshold of 5.8°C, parasitoid larvae and pupae continued to develop during the 70 d of winter field trials yielding reproductively viable adults. With this level of cold tolerance and a host range in excess of 100 aphid species, including some known to overwinter in the UK and other temperate regions, it seems reasonable to predict that *L. testaceipes* would be able to establish in northern Europe. Thermal activity threshold investigations also indicated that *L. testaceipes* would constitute an effective control agent for pest species with similar activity profiles to *Aphis fabae* Scop. (Hemiptera: Aphididae) under a range of climatic conditions. These data are discussed in relation to current debate on the environmental risk assessment and regulatory system in Europe for the release of non-native biological control agents.

Type of Work: Thesis (Doctorates > Ph.D.)
Award Type: Doctorates > Ph.D.
Supervisor(s): Bale, Jeffrey S.
College/Faculty: Colleges (2008 onwards) > College of Life & Environmental Sciences
School or Department: School of Biosciences
Funders: None/not applicable
Subjects: Q Science > QR Microbiology
URI: http://etheses.bham.ac.uk/id/eprint/1071
https://topospaces.subwiki.org/wiki/Tame_submanifold
# Tame submanifold Let $M$ be a manifold of dimension $m$ and $N$ a submanifold of dimension $n$. Then $N$ is termed tame in $M$ if for every point $x \in N$, there exists a neighbourhood $U$ of $x$ in $M$ such that the pair $(U, U \cap N)$ is homeomorphic to the pair $(\R^m,\R^n)$ where $\R^n$ is viewed as a linear subspace of $\R^m$. An example of a submanifold which is not tame is the Alexander horned sphere in $\R^3$.
https://merenlab.org/2020/07/22/interacdome/
# Estimating per-residue binding frequencies with InteracDome

### a post by Evan Kiefl

This feature is implemented in anvi’o v7 and later. You can identify which version you have on your computer by typing anvi-self-test --version in your terminal. The latest version of anvi’o is v7. See the release notes.

## Who is this post for?

NOTE: For a more practical tutorial on the same topic, please visit the infant gut tutorial

This post is like a technical diary for my implementation of InteracDome into anvi’o. The primary purpose of this is to provide all the technical details in one place, so that (1) people who inevitably use this in anvi’o know what is going on under the hood, and (2) people digging into the codebase for debugging or extending features understand why I made certain decisions. It is thus a very technical blog post, and in that sense is quite distinct from other stuff we write about. But if you think you may be interested, please keep reading. I will not pretend that this has a start, middle, and end–it is more a reference for all the decisions I made. Just keep in mind that I will not be shying away from sharing code snippets from the codebase, so if you’re code-shy, this is your trigger warning. This post is not version-controlled. The codebase is dynamic and will inevitably change, but this post will (probably) not, and is therefore merely a snapshot of what once was (July 20th, 2020). I include it in the hopes it provides conceptual clarity, and hope it is not taken too seriously, or as an accurate representation of anvi’o’s codebase.

## Introduction

Far too often we do not know what our SNV, SAAV, and SCV data mean. The problem is that our data output is nucleotide sequence, which defines the blueprint for function, but is a very abstracted output from what we are trying to learn about.
As eloquently put by Harms and Thornton, as a practical convenience, we as evolutionary biologists have a tendency to “treat molecular sequences as mere strings of letters, the patterns of which carry the traces of historical processes, rather than as functioning objects for which the physical properties determine their behaviour”. It is from these patterns that we can learn so much by analyzing variant data derived from metagenomes. However, given that the SNV/SAAV/SCV (I’ll just call them variants from now on) patterns by themselves offer zero insight into fitness, within our field variant data is most commonly used to identify and track “strains”. And to be honest, that saddens me because there is so much more potential than using them as ecological markers. To bridge a gap between metagenomic sequence variants and the structural biology of gene products that underpin function and fitness is no easy task, but I think it’s critical if we want to empower our findings with biochemical information. (As a small step in this direction, Özcan, Meren, and I developed anvi’o structure, a way to visualize metagenomic sequence variants directly on predicted protein structures). Related to this concept, one of the most transformative talks I have ever attended was given by Dr. Mona Singh about this paper (Kobren and Singh, 2018). Kobren and Singh took every Pfam family and searched for whether or not any members of the family had crystallized structures that co-complexed with bound ligands. If so, then they calculated 3D distance scores of each residue to the ligand(s) and used these distances as an inverse proxy for binding likelihood (the more likely you are to be involved in the binding of a ligand, the closer you are in physical space to the ligand). They aggregated the totality of these results and ended up with thousands of Pfam IDs for which they could attribute per-residue ligand-binding scores to the Pfam’s associated hidden Markov model (HMM) profile.
They used this to extend our knowledge of the human proteome, however when I was listening to the talk all I was thinking about was applying it to metagenomics. They entitled their software InteracDome, and there is an online server where you can give an amino acid sequence, and their server will run an HMM search of your sequence against the InteracDome database and attribute estimated binding scores to each residue in your sequence. That is basically exactly what I wanted to do, except I wanted to extend this to the potentially massive number of genes that can typically be found in anvi’o contigs databases. Furthermore, I wanted to store all the information in the contigs database for further data analysis and visualization, as is the anvi’o way.

## Breaking it down into (7) components

Here are the bird’s-eye-view components of InteracDome’s implementation into anvi’o:

1. Storing a local copy of InteracDome’s tab-separated files
2. Storing the HMM profiles that the contig database’s genes are searched against
3. Running the HMM
4. Parsing HMMER’s standard output file
5. Filtering HMM hits
6. Matching binding frequencies to the gene’s residues
7. Storing the per-residue binding frequencies into the contigs database

## (1) and (2): Storing copies of the InteracDome datasets and the corresponding HMM profiles

### Description of tab-separated InteracDome files

What are these tab-separated files, you say? Well they contain the binding frequencies associated to each match-state (read as: residue) of the HMM.
Here is what one looks like (be sure to scroll right to get a complete picture of the table): pfam_id domain_length ligand_type num_nonidentical_instances num_structures binding_frequencies PF00001_7tm_1 268 1WV 3 4 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0.460539215686,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0.539460784314,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0.360294117647,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 PF00001_7tm_1 268 2CV 7 10 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0.103823009901,0,0,0.103823009901,0,0,0,0,0,0.207401960435,0,0,0,0.155734514852,0,0,0.577037641019,0,0,0.572218731277,0.0519115049506,0,0.310492792233,0.473458690485,0,0,0,0,0,0.155490455484,0,0,0,0,0,0.109873556499,0,0,0.051667445583,0,0.0519115049506,0,0,0,0.574814741526,0.371783032133,0,0,0,0,0,0,0,0,0,0,0,0,0,0.582709105685,0.681678668828,0.112286030157,0,0,0,0,0,0,0,0.0517419453701,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0.0519115049506,0.207646019802,0,0,0,0,0,0,0.0519115049506,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0.0519115049506,0,0.103823009901,0.155734514852,0,0,0.0519115049506,0.0519115049506,0,0,0,0,0,0,0,0,0,0,0.0519115049506,0,0.051667445583,0,0,0,0.0544330154125,0,0,0,0,0,0.051667445583,0,0,0,0,0,0,0 PF00001_7tm_1 268 ERM 6 5 
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1.0,0.123851117406,0,0,1.0,1.0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0.123851117406,0.258781869688,0.134930752282,0.125645262827,0.125645262827,0,0,0.490352533837,0.296797293044,0,0.125220333648,0.134930752282,0,0,0,0,0,0,0.134930752282,0,0.31920050362,0,0,0.296797293044,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0.296797293044,0,0,0.490352533837,0,0,0,0.490352533837,0.123851117406,0.249071451054,0.318775574441,0.193555240793,0,0.125220333648,0,0.249496380233,0,0,0,0.125645262827,0,0,0,0,0.431728045326,0.296797293044,0,0.123851117406,0.422017626692,0,0,0.249071451054,0,0,0.123851117406,0.125220333648,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 PF00003_7tm_3 238 SM_ 3 4 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0.248912097476,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0.248912097476,0,0,1.0,1.0,0,0,0.498259355962,0.751087902524,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0.248912097476,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0.249347258486,0.249347258486,0.497824194952,0,0.249347258486,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0.249347258486,0,0.497824194952,0,0.249347258486 PF00003_7tm_3 238 ALL_ 3 4 
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0.248912097476,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0.248912097476,0,0,1.0,1.0,0,0,0.498259355962,0.751087902524,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0.248912097476,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0.249347258486,0.249347258486,0.497824194952,0,0.249347258486,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0.249347258486,0,0.497824194952,0,0.249347258486 PF00004_AAA 132 SM_ 205 166 0.0114604627231,0.00477140578191,0.00159046859397,0.00573253589269,0.00636187437588,0.0156074891605,0.151915630049,0.00636187437588,0.165530516241,0.00465063703568,0.7665876828,0.782039254129,0.638852101367,0.0148199070736,0.0197556615281,0.0659386849446,0.0233671491129,0.0274992668572,0.0068758651402,0.0243892439834,0.016548822645,0.0175596896304,0.0235691023598,0.0148281476804,0.0108389692647,0.0200299870214,0.0158208194559,0.136698750907,0.0183922951312,0.0108666290365,0.00319897110896,0.00352559062884,0.00340017637127,0.00310366737855,0.00743451388943,0,0,0.00226506315454,0.00226506315454,0.00272300017124,0,0.00159046859397,0.00435666305088,0.00833301017716,0.00674254158319,0.00318093718794,0.00871332610176,0.00871332610176,0.00594713164485,0.00636187437588,0.00477140578191,0.00636187437588,0.00159046859397,0.00159046859397,0,0,0,0,0.0166834633166,0.0124614217996,0.0123052272426,0.00332776801496,0,0,0.0235604250871,0.0205062852754,0.00159046859397,0.00159046859397,0.00443759986205,0.0108022602052,0,0.00758910697789,0.042872742652,0,0,0,0.0205374756636,0,0,0,0.00346838508927,0.00216787556951,0.00390892326059,0.00781784652118,0.00604666604827,0.00768955958345,0.0109017146314,0.012587618788,0.00954249832977,0.0143484218372,0.00116949287805,0.0021871566269,0,0.0243284702568,0.00283095932639,0.00102104179321,0.00897
890971083,0.00204208358642,0.0122165343967,0,0,0,0,0.00332776801496,0.00649934147293,0.014056198686,0.00317157345797,0,0.00410153185904,0.0647003263302,0.00826993895804,0.00378063632286,0,0.0446638329233,0.000961349923678,0.00186720282586,0.00186260580663,0.0444138756133,0.00340073494863,0.00628002363626,0.0196825372299,0.00198595471452,0.0189300031233,0.0107861573734,0.00416646190614,0.00314850655058,0.0220221369003,0.00639178475723,0.0043743132538,0.00886361588898,0.0110906488313,0.0021871566269 PF00004_AAA 132 DRUGLIKE_ 205 166 0.0114604627231,0.00477140578191,0.00159046859397,0.00573253589269,0.00636187437588,0.0156074891605,0.151915630049,0.00636187437588,0.165530516241,0.00465063703568,0.7665876828,0.782039254129,0.638852101367,0.0148199070736,0.0197556615281,0.0659386849446,0.0233671491129,0.0274992668572,0.0068758651402,0.0243892439834,0.016548822645,0.0165092215714,0.0235691023598,0.0137776796214,0.0108389692647,0.0200299870214,0.0158208194559,0.136698750907,0.0183922951312,0.0108666290365,0.00319897110896,0.00352559062884,0.00340017637127,0.00310366737855,0.00743451388943,0,0,0.00226506315454,0.00226506315454,0.00272300017124,0,0.00159046859397,0.00435666305088,0.00833301017716,0.00674254158319,0.00318093718794,0.00871332610176,0.00871332610176,0.00594713164485,0.00636187437588,0.00477140578191,0.00636187437588,0.00159046859397,0.00159046859397,0,0,0,0,0.0156329952576,0.0124614217996,0.0123052272426,0.00332776801496,0,0,0.0235604250871,0.0205062852754,0.00159046859397,0.00159046859397,0.00443759986205,0.0108022602052,0,0.00758910697789,0.042872742652,0,0,0,0.0205374756636,0,0,0,0.00346838508927,0.00216787556951,0.00390892326059,0.00781784652118,0.00604666604827,0.00768955958345,0.0109017146314,0.012587618788,0.00954249832977,0.0143484218372,0.00116949287805,0.0021871566269,0,0.0243284702568,0.00283095932639,0.00102104179321,0.00897890971083,0.00204208358642,0.0111660663377,0,0,0,0,0.00332776801496,0.00649934147293,0.014056198686,0.00317157345797,0,0.004101
53185904,0.0647003263302,0.00826993895804,0.00378063632286,0,0.0446638329233,0.000961349923678,0.00186720282586,0.00186260580663,0.0444138756133,0.00340073494863,0.00628002363626,0.0196825372299,0.00198595471452,0.0189300031233,0.0107861573734,0.00416646190614,0.00314850655058,0.0220221369003,0.00639178475723,0.0043743132538,0.00886361588898,0.0110906488313,0.0021871566269 PF00004_AAA 132 ANP 12 16 0,0,0,0,0,0,0.293054223957,0,0.220890723765,0,0.902585657643,0.902585657643,0.902585657643,0,0.0365564138796,0.149544866261,0.0365564138796,0,0,0,0,0,0,0,0,0,0,0.11424794602,0,0,0,0,0.0359910406715,0.0608579284774,0,0,0,0,0,0,0,0,0,0.0608579284774,0.0608579284774,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0.0419485401186,0,0,0,0,0,0,0,0.14472176965,0,0,0,0.18127818353,0,0,0,0.0365564138796,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0.11424794602,0,0.11424794602,0,0,0,0,0,0,0,0,0,0,0.129010569799,0,0,0,0.112988452381,0,0,0,0.112988452381,0,0,0,0,0,0.0419485401186,0,0,0,0,0,0,0,0 PF00004_AAA 132 PEPTIDE_ 6 9 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0.12131242741,0.653927119629,0.188346883469,0,0,0,0,0,0,0,0,0,0,0,0,0,0.251899438637,0.0941734417344,0.0941734417344,0,0,0.251899438637,0.251899438637,0,0,0,0,0,0,0,0,0,0,0,0,0,0.188346883469,0,0,0,0,0.12131242741,0,0.24262485482,0.162299167635,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 PF00005_ABC_tran 137 SM_ 110 141 
0.0377234744273,0.00461980790097,0.0096909780396,0.0157905225569,0.0169710021629,0.0275121349589,0.0156602458352,0.00405947701988,0.00739122457456,0.00768805945304,0,0.00161828107474,0.00591327371089,0.00202973850994,0.00579099946437,0.00416338695375,0.00323133939386,0,0.00548167659646,0.303901345366,0,0.280623915296,0.0067656772453,0.679542204351,0.697685738027,0.691214242456,0.00839368055968,0.00482361342933,0.0397878560196,0.0227702896963,0.00909516751323,0.0349315331438,0.0131520331553,0.0193392901203,0.0284420664734,0.0130227255943,0.012984391476,0.0106552933277,0.0204835408505,0.0189168490431,0.0285336657781,0.0275702117864,0,0.00457484438895,0,0,0,0.0238587624559,0.021829023946,0.00254510587901,0.019283918067,0,0.0217485765752,0.0237783150851,0.0213136565769,0.0276168909756,0.0118202358496,0.0114312928629,0.012052883022,0.0100482164063,0.0063542198101,0.00944680557652,0,0.00174289125479,0.0278798911495,0.097924307303,0.00184563007097,0.00635965898894,0.0322596343334,0.0297949758252,0.0490094392875,0.00790848545895,0.0292221420359,0.0249021779027,0,0.0234686984736,0.00517652112411,0.00215504189665,0,0,0.00764743755861,0.00471086684116,0.00909061002502,0.0225204772897,0.00473593873536,0,0.00399443207795,0.00197991580508,0,0,0.00270047617867,0.00251906964526,0.00289475535167,0.00277098703935,0.00270047617867,0.111023129603,0.0217327928995,0.019026506502,0.00270047617867,0.014177092878,0.0679676350364,0.0137430965532,0,0.190965797116,0,0.0585827481125,0.0127265974899,0.0652120185593,0.00794644519468,0.102944250852,0.0137201014491,0.0138465570152,0.0945003452098,0,0.0232318158738,0.00900928101363,0.00996736376179,0,0.0169296493927,0.018744119295,0.0136624119088,0.00214880200698,0.0166390770754,0.0058505131038,0.0142845969412,0.0157676346769,0.0166971239124,0.00307952606445,0,0,0.00264422779836,0.0129966476375,0.0352791479778,0.153867299413,0.00694183181232,0.00489649026725,0.0690208616258 PF00005_ABC_tran 137 METABOLITE_ 93 115 
0.0423846282715,0.00244789601093,0.00981946362115,0.017596034204,0.0155837013976,0.0288244918514,0.0149514607241,0.00489579202186,0.00867883255515,0.00935790985627,0,0.00195760342646,0.00193456357756,0.00244789601093,0.00692361173264,0.00509590025931,0.00386912715512,0,0.00645575857075,0.248906074717,0,0.288270775052,0.00801153357221,0.663026833878,0.683136475548,0.681364675625,0.0102468162603,0.00641783224383,0.0346052146251,0.0257845741747,0.0109370965404,0.0417062809958,0.0158098487134,0.0232187100008,0.0311952501979,0.0155601075809,0.0136836831039,0.0109042741005,0.0224747866107,0.0219811610125,0.0265527236302,0.0273664044944,0,0.00558619284378,0,0,0,0.025763062536,0.023315166525,0.00313829683285,0.0201768696922,0,0.0201768696922,0.0226247657031,0.0226247657031,0.0290292426008,0.0110260155152,0.0135234901915,0.0144110449998,0.0119763190905,0.00752124098774,0.0111141050209,0,0.00207540533685,0.0302486042934,0.0524527668327,0.00227251781357,0.00765900406687,0.0354870010588,0.0354870010588,0.0526979917992,0.00665072854151,0.0292754942446,0.0269726888535,0,0.0252313733622,0.00629239146422,0.00260660765911,0,0,0.00886792919754,0.00555046745964,0.010788864225,0.0269824218852,0.00556363756128,0,0.00488547577917,0.00242060730148,0,0,0.00352968911807,0.00386606577791,0.00326618240823,0.00330305433987,0.00352968911807,0.115233025287,0.0130981412706,0.00653821951543,0.00352968911807,0.010863080338,0.0663599175597,0.0052259159312,0,0.184754657886,0,0.0599124302967,0.0162860401874,0.0573467317214,0.00555046745964,0.0728912112544,0.0135766150981,0.00528373197268,0.0739797676168,0,0.0166957068247,0.00427115563956,0.0118994288571,0,0.0185147541652,0.0118994288571,0.0147191112135,0.0026129579656,0.0179293823307,0.00330305433987,0.0154092075878,0.016929152311,0.0142435985256,0,0,0,0.00317755916369,0.0136260979711,0.0335947867361,0.121778722136,0.00840347509489,0.00584833317077,0.0677237376771 PF00005_ABC_tran 137 ANP 17 19 
0,0,0.0235131301701,0,0,0.0235131301701,0.0470262603402,0,0,0,0,0,0,0,0,0,0,0,0,0.841006106374,0,0.271660563153,0,0.676289118674,0.676289118674,0.632069683936,0,0,0.0823856755492,0,0,0,0,0,0.031959449247,0,0.0235131301701,0.0235131301701,0.0235131301701,0.0470262603402,0.0940525206804,0.0705393905103,0,0,0,0,0,0,0,0,0,0,0,0,0,0.0470262603402,0,0,0,0,0,0,0,0,0,0.430099299531,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0.102133756407,0.102133756407,0,0,0.062410917336,0.0597398340485,0,0.125591324241,0,0,0,0.117715952724,0.0255334391017,0.360214252687,0.0347094361003,0.0597398340485,0.303224405492,0,0.0597398340485,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0.0442194347374,0.526216121352,0,0,0.229658360646 PF00006_ATP-synt_ab 213 SM_ 48 83 0,0,0,0.00418690801436,0,0,0.0203667663316,0,0.00386900178183,0,0,0.00386900178183,0,0,0,0.00386900178183,0,0,0,0,0,0,0.0988488771734,0.175445150161,0.0695603302947,0.00173400036405,0.57566732122,0.612799884263,0.638785090543,0.0886847595391,0.0538437941832,0.030641262013,0.0880153017934,0.0615367639825,0.0867187752413,0.0612003445172,0.0836536701719,0.0573313427354,0.0417983345638,0.0133194461894,0.0230254503209,0.019351851181,0.00386900178183,0.0339285033528,0.0242224992213,0.0121112496106,0,0,0,0,0.0739242998399,0.146139675162,0,0.00522074083795,0.0299399626262,0,0.0247466468246,0.0331204628534,0,0.00969600582193,0.00969600582193,0.0022941917051,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0.00386900178183,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0.0429700305796,0.0325605349484,0.0325605349484,0,0,0,0.0247466468246,0.0247466468246,0,0,0.0163728307959,0,0.00297268548145,0,0,0,0,0,0,0,0,0,0.00787881465062,0.00393940732531,0,0,0,0,0,0,0,0,0,0,0,0.00297268548145,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0.0459784485426,0,0,0.00173400036405,0,0,0,0,0.0125607240431,0.017819954343,0,0.0125607240431,0.0121859227816,0,0,0,0.0626004689813,0,0,0.00173400036405,0.0043609421676,0,0,0,0,0.0127797902401,0,0,0.0389426376458,0.199
928757533,0.0125562590955,0.0022941917051,0.0117102485569,0.0177066053529,0.475712856261,0.00868954844585,0.014584989095,0.025337533588,0.0970702786178,0.0297709240626,0.045010602041,0.0189803777073,0.0471133143059,0.114051762747,0.0518791397563

You can see that the binding_frequencies column contains per-residue scores from 0 to 1 that say how likely each residue is to be involved in binding. The values are comma-separated, and there are as many values as there are match states in the HMM, i.e. the number of values in binding_frequencies matches the value of domain_length. Also notice that ligand_type varies, and a given Pfam can have multiple ligands.

I hope it’s appreciated that these files give the essential output of the InteracDome workflow, and there are two such files. The first is the non-redundant representable set, which “correspond to domain-ligand interactions that had nonredundant instances across three or more distinct PDB structures. [Kobren and Singh] recommend using this collection to learn more about domain binding properties”. If I recall, there are 2375 such Pfam IDs in this set (compared to the roughly 15,000-20,000 total Pfam IDs). The second set is the confident set, which “correspond to domain-ligand interactions that had nonredundant instances across three or more distinct PDB entries and achieved a cross-validated precision of at least 0.5. [Kobren and Singh] recommend using this collection to annotate potential ligand-binding positions in protein sequences”.

### Description of the HMM profiles

Since each entry in the InteracDome dataset corresponds to a Pfam, we just need the Pfam database, which can be readily downloaded from the EBI ftp server, or by running anvi-setup-pfams. But since not all Pfams are present in the InteracDome dataset, we merely need the subset of the Pfam database that contains Pfams with at least one entry in the non-redundant representable InteracDome set.
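Stepping back to the binding_frequencies format for a moment, one row of these tab-separated files can be unpacked like so. The column names follow the table shown above, but the short row here is a made-up example, since real rows carry hundreds of frequencies:

```python
# Sketch: unpack one row of an InteracDome tab-separated file into
# (Pfam id, ligand, per-match-state binding frequencies).
def parse_interacdome_row(row):
    pfam_id, domain_length, ligand_type, n_instances, n_structures, freq_str = \
        row.rstrip('\n').split('\t')
    binding_freqs = [float(x) for x in freq_str.split(',')]
    # sanity check: exactly one frequency per match state of the HMM
    assert len(binding_freqs) == int(domain_length)
    return pfam_id, ligand_type, binding_freqs

# a shortened, fabricated row for illustration (real domain_length is 268)
row = "PF00001_7tm_1\t5\t1WV\t3\t4\t0,0,0.4605,0,0.5394"
pfam_id, ligand, freqs = parse_interacdome_row(row)
```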
From now on I’ll just call this subset the InteracDome Pfams (IPfams). Also, it needs to be said that InteracDome was carried out using Pfam 31.0, so it is critical that we use that version too.

### anvi-setup-interacdome

Acquiring InteracDome’s tab-separated files and the IPfam HMM profiles is just a one-time setup, and therefore I created a program called anvi-setup-interacdome to handle these tasks. So to make these inputs available, the user must run anvi-setup-interacdome, which instantiates a class called InteracDomeSetup. Here is the class (this is merely a snapshot of the class as of July 12th, 2020; the formatting of this post mangled a few lines, so elided bodies and assumed method names are marked with bracketed comments):

```python
class InteracDomeSetup(object):
    def __init__(self, args, run=terminal.Run(), progress=terminal.Progress()):
        """Setup a Pfam database for anvi'o

        Parameters
        ==========
        args : argparse.Namespace
            See bin/anvi-setup-interacdome for available arguments
        """

        self.run = run
        self.progress = progress
        self.interacdome_data_dir = args.interacdome_data_dir

        self.pfam_setup = None

        self.interacdome_files = {
            # [filename -> URL entries were lost in formatting]
        }

        if self.interacdome_data_dir and args.reset:
            raise ConfigError("You are attempting to run InteracDome setup on a non-default data directory (%s) using the --reset flag. "
                              "To avoid automatically deleting a directory that may be important to you, anvi'o refuses to reset "
                              "directories that have been specified with --interacdome-data-dir. If you really want to get rid of this "
                              "directory and regenerate it with InteracDome data inside, then please remove the directory yourself using "
                              "a command like rm -r %s. We are sorry to make you go through this extra trouble, but it really is "
                              "the safest way to handle things." % (self.interacdome_data_dir, self.interacdome_data_dir))

        if not self.interacdome_data_dir:
            self.interacdome_data_dir = constants.default_interacdome_data_path

        self.run.info('Data directory', self.interacdome_data_dir)
        self.run.info('Reset contents', args.reset)

        filesnpaths.is_output_dir_writable(os.path.dirname(os.path.abspath(self.interacdome_data_dir)))

        if not args.reset and not anvio.DEBUG:
            self.is_database_exists()

        filesnpaths.gen_output_directory(self.interacdome_data_dir, delete_if_exists=args.reset)


    def is_database_exists(self):
        """Raise ConfigError if database exists

        Currently, this primitively decides if the database exists by looking at whether
        Pfam-A.hmm or Pfam-A.hmm.gz exists.
        """

        if (os.path.exists(os.path.join(self.interacdome_data_dir, 'Pfam-A.hmm'))
                or os.path.exists(os.path.join(self.interacdome_data_dir, 'Pfam-A.hmm.gz'))):
            pass  # [the ConfigError raised here was lost in formatting]


    def download(self):  # [method name assumed; the def line was lost in formatting]
        """Download the InteracDome tab-separated files

        These datasets can be found at the interacdome webpage: https://interacdome.princeton.edu/
        """

        for path, url in self.interacdome_files.items():
            utils.download_file(  # [call name assumed]
                url,
                os.path.join(self.interacdome_data_dir, path),
                check_certificate=False,
                progress=self.progress,
                run=self.run
            )


    def setup_pfam(self):  # [method name assumed]
        """Setup the pfam data subset used by interacdome

        Currently, interacdome only works for pfam version 31.0, so that is the version
        downloaded here.
        """

        pfam_args = argparse.Namespace(
            pfam_data_dir=self.interacdome_data_dir,
            pfam_version='31.0',
            reset=False,
        )

        self.pfam_setup = pfam.PfamSetup(pfam_args)
        self.pfam_setup.get_remote_version()
        # [remaining setup calls were lost in formatting]


    def load_interacdome(self, kind='representable'):  # [signature assumed]
        """Loads the representable interacdome dataset as pandas df"""

        data = InteracDomeTableData(kind=kind, interacdome_data_dir=self.interacdome_data_dir)
        return data.get_as_dataframe()


    def get_interacdome_pfam_accessions(self):
        """Get the representable interacdome Pfam accessions"""
        # [body lost in formatting]


    def filter_pfam(self):
        """Filter Pfam data according to whether the ACC is in the InteracDome dataset"""

        interacdome_pfam_accessions = self.get_interacdome_pfam_accessions()

        hmm_profiles = pfam.HMMProfile(os.path.join(self.interacdome_data_dir, 'Pfam-A.hmm'))
        hmm_profiles.filter(by='ACC', subset=interacdome_pfam_accessions)
        hmm_profiles.write(filepath=None) # overwrites

        # hmmpresses the new .hmm
        self.pfam_setup.hmmpress_files()

        # We also filter out the Pfam-A.clans.tsv file, since it is used as a catalog
        clans_file = os.path.join(self.interacdome_data_dir, 'Pfam-A.clans.tsv')
        # [remainder lost in formatting]


    def setup(self):
        """The main method of this class. Sets up the InteracDome data directory for usage"""

        # [download and Pfam setup calls were lost in formatting]
        self.run.warning('', header='Filtering Pfam HMM profiles', lc='yellow')
        self.filter_pfam()
```

First, anvi’o creates a directory under anvio/data/misc/InteracDome (by default) which will end up housing the IPfam HMM profiles and the InteracDome datasets. Then the main method, setup, is called, which is directly above us. We can see that first, anvi’o downloads the tab-separated files. Since these files were created for Pfam version 31.0, anvi’o next downloads a copy of the Pfam 31.0 HMMs. There already exists a framework to download Pfams because of anvi-setup-pfams, so this was very little work to do. Yay for code reusability. As a final step, the Pfam HMMs are subset to the IPfam HMMs, i.e. only the Pfams that are in the InteracDome dataset.
## (3) Running the hidden Markov model (HMM)

HMMs are by no means an elementary topic, and so rather than butcher an explanation with my limited understanding, I defer to this wonderful paper. With both required inputs gathered and set up by anvi-setup-interacdome, the next thing I focused on was actually running an HMM on user genes. This, as well as the remainder of the steps, is carried out by the program anvi-run-interacdome.

The program that goes hand-in-hand with Pfam HMM profiles is HMMER, and in fact, anvi’o already has a driver to run HMMER on a set of user genes, which was written for anvi-run-pfams. Further in fact(?), anvi-run-pfams in some ways does exactly what I want: it takes genes from the contigs database, runs an HMM of the user genes against the Pfam HMM profiles, and annotates genes with the best Pfam hit. The primary difference between anvi-run-interacdome and anvi-run-pfams is that rather than simply attributing to each gene the Pfam ID of the best hit, we instead must keep track of residue-level information associated with each hit, so that we can associate the binding frequencies from the InteracDome dataset. In this sense, things are much more complicated, as I will need to write an in-depth parser of HMMER’s output to keep track of these things.

Currently, anvi-run-pfams can be run with either of the HMMER programs hmmscan or hmmsearch. However, for good reason, hmmsearch in practice runs much faster for querying user genes against an HMM profile database with a size comparable to the Pfam database. Since hmmsearch is faster for our use case, and because I wanted to avoid creating 2 output parsers (one for hmmscan and one for hmmsearch), anvi-run-interacdome is by design only compatible with hmmsearch. All in all, anvi-run-interacdome runs hmmsearch almost exactly like anvi-run-pfams does.
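For the curious, the eventual hmmsearch call has roughly this shape. The flags are standard HMMER options, but the paths and the helper function below are hypothetical; this is not anvi'o's actual driver code:

```python
# Sketch of an hmmsearch invocation against the IPfam profiles.
# --cut_ga applies Pfam's per-profile GA (gathering threshold) cutoffs,
# -o captures the verbose alignment-containing output, and --domtblout
# captures the per-domain tabular output.
def build_hmmsearch_command(hmm_path, fasta_path, output_path, table_path, num_threads=1):
    return [
        'hmmsearch',
        '--cut_ga',                # pre-filter hits with Pfam GA score cutoffs
        '--cpu', str(num_threads),
        '-o', output_path,         # verbose output (alignments included)
        '--domtblout', table_path, # per-domain tabular output
        hmm_path,
        fasta_path,
    ]

cmd = build_hmmsearch_command('Pfam-A.hmm', 'AA_gene_sequences.fa',
                              'hits.out', 'hits.domtbl')
# e.g. subprocess.run(cmd, check=True)  # requires HMMER to be installed
```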
In fact, the responsible class for anvi-run-interacdome, anvio.interacdome.InteracDomeSuper, inherits anvio.pfam.Pfam to manage a lot of the boilerplate code. I chose to apply the HMMER filter parameter --cut_ga, which uses the Pfam GA (gathering threshold) score cutoffs to pre-filter the output of hmmsearch. This cuts out a lot of crap to sift through.

## (4) Parsing HMMER’s standard output file

hmmsearch has 2 outputs. One is a tabular output like this:

```
#                                      --- full sequence --- -------------- this domain ------------- hmm coord ali coord env coord
# target name accession tlen query name accession qlen E-value score bias # of c-Evalue i-Evalue score bias from to from to from to acc description of target
#------------------- ---------- ----- -------------------- ---------- ----- --------- ------ ----- --- --- --------- --------- ------ ----- ----- ----- ----- ----- ----- ----- ---- ---------------------
3609 - 333 2-Hacid_dh PF00389.29 134 8.8e-20 68.1 0.1 1 1 8.7e-23 1.2e-19 67.7 0.1 9 133 20 325 8 326 0.95 -
5374 - 320 2-Hacid_dh PF00389.29 134 2e-10 37.8 0.0 1 1 2.6e-13 3.6e-10 37.0 0.0 21 103 24 195 5 310 0.71 -
3609 - 333 2-Hacid_dh_C PF02826.18 178 1e-55 185.2 0.0 1 1 1.6e-58 1.5e-55 184.7 0.0 2 177 118 293 117 294 0.98 -
5374 - 320 2-Hacid_dh_C PF02826.18 178 1.7e-53 178.0 0.1 1 1 3.2e-56 2.9e-53 177.2 0.1 2 178 108 282 107 282 0.96 -
3608 - 311 2-Hacid_dh_C PF02826.18 178 2.4e-07 27.7 0.0 1 1 4.4e-10 4e-07 26.9 0.0 36 144 12 123 5 130 0.85 -
2301 - 162 2Fe-2S_thioredx PF01257.18 145 3.8e-37 124.7 0.2 1 1 1.6e-40 4.3e-37 124.5 0.2 4 144 12 154 9 155 0.95 -
2121 - 441 2HCT PF03390.14 416 8.5e-157 519.4 35.0 1 1 3.6e-160 9.9e-157 519.2 35.0 1 415 24 434 24 435 0.99 -
1998 - 329 3Beta_HSD PF01073.18 282 3.1e-22 76.2 0.0 1 1 1.1e-23 3.9e-21 72.6 0.0 1 237 4 243 4 254 0.74 -
504 - 316 3Beta_HSD PF01073.18 282 7.7e-18 61.8 0.1 1 1 3.4e-20 1.2e-17 61.2 0.1 1 229 5 232 5 242 0.78 -
[...]
```
The format is tabular and thus easy enough to parse, but unfortunately the information is not detailed enough, since it contains no information about the residue-by-residue alignment of the match states (residues in the HMM profile) to the user gene sequence. The second, more verbose output does:

```
# hmmsearch :: search profile(s) against a sequence database
# HMMER 3.2.1 (June 2018); http://hmmer.org/
# Copyright (C) 2018 Howard Hughes Medical Institute.
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
# query HMM file:                  /Users/evan/Software/anvio/anvio/data/misc/InteracDome/Pfam-A.hmm
# target sequence database:        /var/folders/58/mpjnklbs5ql_y2rsgn0cwwnh0000gn/T/tmpyebp3gbo/AA_gene_sequences.fa.0
# output directed to file:         /var/folders/58/mpjnklbs5ql_y2rsgn0cwwnh0000gn/T/tmpyebp3gbo/AA_gene_sequences.fa.0_output
# per-dom hits tabular output:     /var/folders/58/mpjnklbs5ql_y2rsgn0cwwnh0000gn/T/tmpyebp3gbo/AA_gene_sequences.fa.0_table
# model-specific thresholding:     GA cutoffs
# number of worker threads:        1
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Query:       14-3-3  [M=222]
Accession:   PF00244.19
Description: 14-3-3 protein

Scores for complete sequences (score includes all domains):
   --- full sequence ---   --- best 1 domain ---    -#dom-
    E-value  score  bias    E-value  score  bias    exp  N  Sequence Description
    ------- ------ -----    ------- ------ -----   ---- --  -------- -----------

   [No hits detected that satisfy reporting thresholds]

Domain annotation for each sequence (and alignments):

   [No targets detected that satisfy reporting thresholds]

Internal pipeline statistics summary:
-------------------------------------
Query model(s):                            1  (222 nodes)
Target sequences:                       2754  (847468 residues searched)
Passed MSV filter:                       207  (0.0751634); expected 55.1 (0.02)
Passed bias filter:                       80  (0.0290487); expected 55.1 (0.02)
Passed Vit filter:                         6  (0.00217865); expected 2.8 (0.001)
Passed Fwd filter:                         0  (0); expected 0.0 (1e-05)
Initial search space (Z):               2754
```
[actual number of targets] Domain search space (domZ): 0 [number of targets reported over threshold] # CPU time: 0.01u 0.00s 00:00:00.01 Elapsed: 00:00:00.01 # Mc/sec: 14884.77 // Query: 2-Hacid_dh [M=134] Accession: PF00389.29 Description: D-isomer specific 2-hydroxyacid dehydrogenase, catalytic domain Scores for complete sequences (score includes all domains): --- full sequence --- --- best 1 domain --- -#dom- E-value score bias E-value score bias exp N Sequence Description ------- ------ ----- ------- ------ ----- ---- -- -------- ----------- 8.8e-20 68.1 0.1 1.2e-19 67.7 0.1 1.1 1 3609 2e-10 37.8 0.0 3.6e-10 37.0 0.0 1.5 1 5374 Domain annotation for each sequence (and alignments): >> 3609 # score bias c-Evalue i-Evalue hmmfrom hmm to alifrom ali to envfrom env to acc --- ------ ----- --------- --------- ------- ------- ------- ------- ------- ------- ---- 1 ! 67.7 0.1 8.7e-23 1.2e-19 9 133 .. 20 325 .. 8 326 .. 0.95 Alignments for each domain: == domain 1 score: 67.7 bits; conditional E-value: 8.7e-23 HHHHHHHHH.TTE.EEEEESCSSHHHCHHHHTTESEEEE-TTS-BSHHHHHHHTT--EEEESSSSCTTB-HHHHHHTT-EEEE-TT.TTHHHHHHHH.. CS xxxxxxxxx.xxx.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.xxxxxxxxxx.. RF +e l +l++ ++v +++v +e+ +el+e ++++ ++i ++ +t+e++e+ ++L +i+r+g+G++n+Dldaak++ +V+ +p+ + ++vAE 3609 20 PEHLTRLEKiGTVkHFTVDSEIGGKELAECLQGYTIIIASVTPFFTKEFFEHKDELLLISRHGIGYNNIDLDAAKQHDTIVSIIPAlVERDAVAENNvt 118 667777888655556777779*****************************************************************777889***999* PP ................................................................................................... CS ................................................................................................... RF 2-Hacid_dh 103 ................................................................................................... 
102 *************************************************************************************************** PP .............................................................................T-BHHHHHHHHHHHHHHHHHHH CS .............................................................................xxxxxxxxxxxxxxxxxxxxxx RF 2-Hacid_dh 103 .............................................................................aaTeeaqeniaeeaaenlvafl 124 a+T e +++++e++++ +++++ 3609 218 teesyhmigsaeiakmkdgvylsnsargalideeamiaglqsgkiaglgtdvleeepgrknhpylafenvvmtphtsAYTMECLQAMGEKCVQDVEDVV 316 *************************************************************************************************** PP TTCCGTTBC CS xxxxxxxxx RF 2-Hacid_dh 125 kgespanav 133 +g p+ v 3609 317 QGILPQRTV 325 **7777655 PP >> 5374 # score bias c-Evalue i-Evalue hmmfrom hmm to alifrom ali to envfrom env to acc --- ------ ----- --------- --------- ------- ------- ------- ------- ------- ------- ---- 1 ! 37.0 0.0 2.6e-13 3.6e-10 21 103 .. 24 195 .. 5 310 .. 0.71 Alignments for each domain: == domain 1 score: 37.0 bits; conditional E-value: 2.6e-13 EEEEESCSSHHHCHHHHTT.ESEEEE-TTS-BSHHHHHHH.TT--EEEESSSSCTTB-HHHHHHTT-EEEE-TTTTHHHHHHHH............... CS xxxxxxxxxxxxxxxxxxx.xxxxxxxxxxxxxxxxxxxx.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx............... RF ++ + +t+ l ++ ++ +++++ + +++ ++l+ + Lk+i+++++G D D+d ++e+Gil++n g ++ s+ E++ 4444446666655555555344555555.559999998888************************************************9999888888 PP .........................................................................T CS .........................................................................x RF 2-Hacid_dh 103 .........................................................................a 103 5374 122 qqqmqhtwnqtapsyqqlsgqkmlivgtgqigqqlakfakglnlqvygvntsghvtegfiecysqknmskiihE 195 88888888888888888888888888888888888888888888888887777777775554444444444440 PP [...] 
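To give a sense of what the parser has to deal with, here is a toy that pulls apart one row of the per-domain table that follows each `>>` header. This is not anvi'o's actual parser, which additionally has to walk the alignment blocks; it only handles the summary row, with field names mirroring the columns of that table:

```python
# Toy parser for one per-domain summary row of hmmsearch's verbose output,
# e.g. the row under a '>> <gene id>' header.
def parse_domain_row(line):
    f = line.split()
    return {
        'domain': int(f[0]),
        'qual': f[1],                # '!' passes reporting thresholds, '?' does not
        'score': float(f[2]),
        'bias': float(f[3]),
        'c-evalue': float(f[4]),
        'i-evalue': float(f[5]),
        'hmm_start': int(f[6]),  'hmm_stop': int(f[7]),  'hmm_bounds': f[8],
        'ali_start': int(f[9]),  'ali_stop': int(f[10]), 'ali_bounds': f[11],
        'env_start': int(f[12]), 'env_stop': int(f[13]), 'env_bounds': f[14],
        'mean_post_prob': float(f[15]),
    }

# the row for gene 3609 shown above
row = "1 ! 67.7 0.1 8.7e-23 1.2e-19 9 133 .. 20 325 .. 8 326 .. 0.95"
hit = parse_domain_row(row)
```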
To write the parser I made extensive use of the detailed output description in the HMMER user guide (search for “Step 2: search the sequence database with hmmsearch” to find the relevant section). I won’t reiterate the meaning of each line, so check the guide if you’re keen. The most important thing to note is that this more verbose output contains the same tabular information as the first output (albeit in a more scattered fashion), and additionally has alignments of the HMM profile to the query gene. Consider this section of the output:

```
Query:       2-Hacid_dh  [M=134]
Accession:   PF00389.29
Description: D-isomer specific 2-hydroxyacid dehydrogenase, catalytic domain

Scores for complete sequences (score includes all domains):
   --- full sequence ---   --- best 1 domain ---    -#dom-
    E-value  score  bias    E-value  score  bias    exp  N  Sequence Description
    ------- ------ -----    ------- ------ -----   ---- --  -------- -----------
    8.8e-20   68.1   0.1    1.2e-19   67.7   0.1    1.1  1  3609
      2e-10   37.8   0.0    3.6e-10   37.0   0.0    1.5  1  5374
```

I will refer to this as the domain hits summary. In this case, the IPfam PF00389.29 hit 2 user genes: 3609 and 5374. Then, further down in the file, we can find the alignment for each of these genes. For example, here is where the alignment info for gene 3609 starts: >> 3609 # score bias c-Evalue i-Evalue hmmfrom hmm to alifrom ali to envfrom env to acc --- ------ ----- --------- --------- ------- ------- ------- ------- ------- ------- ---- 1 ! 67.7 0.1 8.7e-23 1.2e-19 9 133 .. 20 325 .. 8 326 .. 0.95 Alignments for each domain: == domain 1 score: 67.7 bits; conditional E-value: 8.7e-23 HHHHHHHHH.TTE.EEEEESCSSHHHCHHHHTTESEEEE-TTS-BSHHHHHHHTT--EEEESSSSCTTB-HHHHHHTT-EEEE-TT.TTHHHHHHHH.. CS xxxxxxxxx.xxx.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.xxxxxxxxxx..
RF +e l +l++ ++v +++v +e+ +el+e ++++ ++i ++ +t+e++e+ ++L +i+r+g+G++n+Dldaak++ +V+ +p+ + ++vAE 3609 20 PEHLTRLEKiGTVkHFTVDSEIGGKELAECLQGYTIIIASVTPFFTKEFFEHKDELLLISRHGIGYNNIDLDAAKQHDTIVSIIPAlVERDAVAENNvt 118 667777888655556777779*****************************************************************777889***999* PP [...] I will refer to this as the hit alignment. The aggregation of all the domain hit summaries and the hit alignments are stored in two datastructures that are attributes of the anvio.parsers.hmmer.HMMERStandardOutput parser class. I will now describe both of them: The first is dom_hits, which is a dataframe which looks like this: pfam_name pfam_id corresponding_gene_call domain qual score bias c-evalue i-evalue hmm_start hmm_stop hmm_bounds ali_start ali_stop ali_bounds env_start env_stop env_bounds mean_post_prob match_state_align comparison_align sequence_align version 0 Beta_elim_lyase PF01212 1762 1 ! 20.9 0.1 1e-08 3.5e-06 33 169 .. 44 177 .. 34 215 .. 0.72 tvnrLedavaelfgke..aalfvpqGtaAnsill.kill.qr..geevivtepahihfdetgaiaelagvklrdlknkeaGkmdlekleaaikevgaheekiklisltvTnntagGqvvsleelrevaaiakkygiplhlDgA ++ +++ael+ + f+ Gt +++ l + + +r g+ +i++ h +et + g +l ++ +++G +++e+l+++i++ e i + +++v n+ G++ +++e+ ev +a+ +i++h+D+ LLQQARKQIAELINVSanEIYFTSGGTEGDNWVLkGTAIeKRefGNHIIISAVEHPAVTETAEQLVELGFELSYAPVDKEGRVKVEELQKLIRK—–ETILVSVMAVNNE–VGTIQPIKEISEV–LAEFPKIHFHVDAV 20 1 PAPS_reduct PF01507 1541 1 ! 36.1 0.1 3.6e-13 1.3e-10 2 164 .. 21 231 .. 20 234 .. 
0.79 lvvsvsgGkdslVllhLalkafkpv….pvvfvdtghefpetiefvdeleeryglrlkvyepeeevaekinaekhgs.slyee.aaeriaKveplkk……………………………aLekldedall..tGaRrdesksraklpiveidedfek………slrvfPllnWteedvwqyilrenipynpLydqgfr + +s+sgGkds +++La + ++ ++ ++ + ++ t++f++++e+ +++ +++ ++++ + + +++ + + + e+ + p k e++ ++a+ +G+R++es +r++ +++ +++ + ++Pl++W+ d+w+ + +++yn +y++ ++ VYFSFSGGKDSGLMVQLANLVAEKLdrnfDLLILNIEANYTATVDFIKKIEQLPRVKNIYHFCLPFFEDNNTSFFQPQwKMWDPsEKEKWIHSLP–KnaitleniddglkkyyslsngnpdrflryfqnwYKEQYPQSAIScgVGIRAQESLHRHSAVTKGENKYKNRcwinitlegNILFYPLFDWKVGDIWAATFKCELEYNYIYEKMYK 18 2 Ank_2 PF12796 1756 1 ! 32.2 0 6.7e-12 2.3e-09 29 84 .] 74 135 .. 53 135 .. 0.85 aLhyAakngnleivklLle…h.a..adndgrtpLhyAarsghleivklLlekgadinlkd aL Aa + +++ vk +l+ + + +d +g+tpL +A+ ++ +ei+k L+++gadinl++ ALLEAANQRDTKKVKEILQdttYqVdeVDTEGNTPLNIAVHNNDIEIAKALIDRGADINLQN 6 3 Ank_2 PF12796 1756 2 ! 28.5 0 9.5e-11 3.3e-08 22 75 .. 199 265 .. 195 267 .] 0.76 pn..k.ngktaLhyAak..ngnl…eivklLleha…..adndgrtpLhyAarsghleivklLle ++ + +g taL+ A+ +gn +ivklL+e++ dn+grt++ yA ++g++ei k+L + IDfqNdFGYTALIEAVGlrEGNQlyqDIVKLLMENGadqsiKDNSGRTAMDYANQKGYTEISKILAQ 6 4 IGPS PF00218 1615 1 ! 20.6 0.1 1.2e-08 4e-06 202 249 .. 195 242 .. 73 248 .. 0.88 LaklvpkdvllvaeSGiktredveklkeegvnafLvGeslmrqedvek +++lv+++++++ae i+t+e+++++k+ gv ++ vG +++r ++ +k IKQLVQENICVIAEGKIHTPEQARQIKKLGVAGIVVGGAITRPQEIAK 20 5 Ribosomal_L33 PF00471 1562 1 ! 66.6 1.5 1.1e-22 3.7e-20 2 47 .] 4 49 .] 3 49 .] 0.97 kvtLeCteCksrnYtttknkrntperLelkKYcprcrkhtlhkEtK +++LeC e+++r Y t+knkrn+perLelkKY p++r++ ++kE K NIILECVETGERLYLTSKNKRNNPERLELKKYSPKLRRRAIFKEVK 19 6 Ribosomal_S14 PF00253 1565 1 ! 83.3 0.1 3.9e-28 1.3e-25 2 54 .] 36 88 .. 35 88 .. 0.98 laklprnssptrirnrCrvtGrprGvirkfgLsRicfRelAlkgelpGvkKaS laklpr+s+p+r+r r++ +GrprG++rkfg+sRi+fRel ++g +pGvkKaS LAKLPRDSNPNRLRLRDQTDGRPRGYMRKFGMSRIKFRELDHQGLIPGVKKAS 20 7 Polysacc_synt_C PF14667 1593 1 ! 61.4 19.2 5.4e-21 1.9e-18 2 139 .. 371 516 .. 370 519 .. 
0.83 LailalsiiflslstvlssiLqglgrqkialkalvigalvklilnllliplfgivGaaiatvlallvvavlnlyalrrllgikl…llrrllkpllaalvmgivvylllllllglllla…al..alllavlvgalvYllllll L+ ++s+ +l+++t++ siLq+l +k+a+ ++ i++l+kli+++++i+lf +G +iat+++ ++++++ +++l+r++ i++ ++ +++ +++vm i+ +l+l+++ ++ + +l + l +++g++v+ + l++ LSATIISTSLLGIFTIVLSILQALSFHKKAMQITSITLLLKLIIQIPCIYLFKGYGLSIATIICTMFTTIIAYRFLSRKFDINPikyNRKYYSRLVYSTIVMTILSLLMLKIISSVYKFEstlQLffLISLIGCLGGVVFSVTLFR 5 It stores all the domain hit information in the domain hit summaries as well as the sequence of the consensus match states, the comparison string, and the sequence of the user gene. This table contains all essential information, however the alignment information is in the form of raw strings (see match_state_align, comparison_align, and sequence_align columns). To further process the alignment info into a more meaningful form, there is a second data structure called ali_info. ali_info is a nested dictionary. It looks like this: ali_info = { # here is the template <gene_callers_id>: { (<pfam_id>, <domain_id>): { <a_dataframe_with_alignment_info> }, }, # here is an example 1699: { ('PF07972', 1): { <a_dataframe_with_alignment_info> }, ('PF07972', 2): { <another_dataframe_with_alignment_info> }, ('PF03383', 1): { <yet_another_dataframe_with_alignment_info> }, }, } Each dataframe contains detailed alignment info for a given hit. In the above example, if we wanted to access alignment info about the hit of PF00389.29 against the user gene with ID 3609, we would access the dataframe via ali_info[3609][('PF00389', 1)]. Since PF00389 hit only once to 3609, the hit has the domain ID 1. This entry in the nested dictionary is a dataframe that looks like this: seq hmm comparison seq_positions hmm_positions 0 P E + 19 8 1 E E E 20 9 2 H E   21 10 3 L L L 22 11 4 T E   23 12 5 R L + 24 13 6 L L L 25 14 7 E K + 26 15 This dataframe is a per-residue characterization of the alignment strings, i.e. 
the columns match_state_align, sequence_align, and comparison_align found in dom_hits. For convenience, positions in the alignment that contain a gap in either the HMM or the gene sequence are stripped from this dataset. seq_positions correspond to the 0-indexed positions in the user gene (in this example the gene is 3609). In other words, these are the codon_order_in_gene values that aligned to a non-gap position in the HMM. Why doesn’t seq_positions start at 0? Because in this example, the hit starts at the 20th amino acid in the user gene. Sanity check: this means the ali_start entry in the corresponding row of self.dom_hits would be 20.

Together, these 2 data structures are the fruits of the labor of parsing the HMMER standard output, and they provide generic utility that extends beyond just anvi-run-interacdome.

## (5) Filtering HMM hits

### Pfam gathering threshold

As previously mentioned, hmmsearch is run with the --cut_ga flag, which uses Pfam gathering thresholds to pre-filter a lot of poor-quality hits.

### Filtering partial hits

In the InteracDome paper, Kobren and Singh are more stringent about filtering hits than I decided to be. In the paper, they demand that 100% of the Pfam HMM maps to the user gene, i.e. the first and last position of the HMM hit should be non-gap characters, or else the hit is discarded. In other words, they discarded all partial hits. I have opted to relax this constraint since it seemed too strict for my applications; however, I did want to enforce some kind of threshold that would discard hits that are very partial. To quantify the partialness of a hit I defined the hit fraction, $f = K/L$, where $K$ is the alignment length (hmm_stop - hmm_start) and $L$ is the length of the HMM profile (the number of match states). For example, in the hit below, PF01212 has a length of $293$ (not shown below, but trust me), and the alignment length is $169-33=136$. This means the hit fraction is $f = 136/293 \approx 0.46$.
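In code this is a one-liner; a quick sketch using the numbers from the PF01212 example (the function name is mine, just for illustration):

```python
def hit_fraction(hmm_start, hmm_stop, hmm_length):
    """Fraction of the HMM profile covered by the hit, f = K / L."""
    return (hmm_stop - hmm_start) / hmm_length

# the PF01212 hit: the alignment spans match states 33-169 of a 293-state profile
print(round(hit_fraction(33, 169, 293), 2))  # → 0.46
```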
```
  pfam_name pfam_id corresponding_gene_call domain qual score bias c-evalue i-evalue hmm_start hmm_stop hmm_bounds ali_start ali_stop ali_bounds env_start env_stop env_bounds mean_post_prob match_state_align comparison_align sequence_align version
0 Beta_elim_lyase PF01212 1762 1 ! 20.9 0.1 1e-08 3.5e-06 33 169 .. 44 177 .. 34 215 .. 0.72 tvnrLedavaelfgke..aalfvpqGtaAnsill.kill.qr..geevivtepahihfdetgaiaelagvklrdlknkeaGkmdlekleaaikevgaheekiklisltvTnntagGqvvsleelrevaaiakkygiplhlDgA ++ +++ael+ + f+ Gt +++ l + + +r g+ +i++ h +et + g +l ++ +++G +++e+l+++i++ e i + +++v n+ G++ +++e+ ev +a+ +i++h+D+ LLQQARKQIAELINVSanEIYFTSGGTEGDNWVLkGTAIeKRefGNHIIISAVEHPAVTETAEQLVELGFELSYAPVDKEGRVKVEELQKLIRK-----ETILVSVMAVNNE--VGTIQPIKEISEV--LAEFPKIHFHVDAV 20
```

To apply a filter based on partialness, I created a threshold value, self.min_hit_threshold (specified with --min-hit-fraction), that the hit fraction of a hit must exceed in order for the hit to be kept. If self.min_hit_threshold = 0.5, then the above hit would be removed, since its hit fraction ($\approx 0.46$) is less than this value.

To determine a reasonable default value for this threshold, I calculated the hit fractions for all hits of an E. faecalis genome against the IPfams, which yielded 2898 hits. Here is the resulting histogram of hit fractions:

From this histogram, I deemed 0.8 a good compromise between data retention and quality, so that is the default as of July 21st, 2020.

### Filtering bad hits with information content

Really, filtering bad hits is what the Pfam gathering threshold is designed for. But to be extra sure that we are not including junk, I followed the protocol of the InteracDome paper. To understand what Kobren and Singh did, we must first make a one-paragraph detour to define some terms.

An HMM profile is composed of match states, one for each residue position in the HMM profile. And each match state is composed of 20 emission probabilities, one for each amino acid.
An emission probability gives the probability that a given amino acid is observed for the match state. So match states which are highly conserved will have a single dominant emission probability, whereas an unconserved match state will have more uniformly distributed emission probabilities, meaning many amino acids can inhabit the match state. To quantify how conserved a match state is, there is a quantity used in sequence logos called information content (IC), which compares the entropy of a perfectly uniform distribution of emission probabilities to the observed entropy of emission probabilities. The formula for IC boils down to:

$\text{IC} = \log_2{20} + \sum_{i=1}^{20}{ f_i \log_2{f_i} }$

where $f_i$ is the emission probability of the $i$th amino acid. The upshot is this: the higher the information content, the more conserved the match state is.

With this in mind, Kobren and Singh identified each match state in a hit that had an IC exceeding 4 (very conserved) and took note of the consensus amino acid (the amino acid with the highest emission probability). Then, in order for a hit to be retained, the gene sequence this HMM profile hit to must share all the same amino acids as the consensus amino acids at each of these conserved positions. The idea is that if one is to trust the quality of the hit, these positions should match, since they are conserved.

I wanted to replicate this filtering procedure. This is great and all, except the HMMER output does not provide emission probabilities, and therefore calculating IC is a non-trivial task…

#### Calculating information content

To calculate information content (IC), one must dive into the .hmm file itself (the one created during anvi-setup-interacdome). An .hmm file contains all the information necessary about each HMM profile.
Here is the content for just one HMM profile in the .hmm:

```
HMMER3/f [3.1b2 | February 2015]
NAME  SnoaL_2
ACC   PF12680.6
DESC  SnoaL-like domain
LENG  102
ALPH  amino
RF    no
MM    no
CONS  yes
CS    yes
MAP   yes
DATE  Fri Jan 20 17:09:36 2017
NSEQ  346
EFFN  15.065170
CKSUM 2037425494
GA    27.00 27.00;
TC    27.00 27.00;
NC    26.90 26.90;
BM    hmmbuild HMM.ann SEED.ann
SM    hmmsearch -Z 26740544 -E 1000 --cpu 4 HMM pfamseq
STATS LOCAL MSV       -9.2504  0.71708
STATS LOCAL VITERBI  -10.2672  0.71708
STATS LOCAL FORWARD   -3.5453  0.71708
HMM          A        C        D        E        F        G        H        I        K        L        M        N        P        Q        R        S        T        V        W        Y
            m->m     m->i     m->d     i->m     i->i     d->m     d->d
  COMPO   2.34288  4.65458  2.72525  2.62519  3.03686  2.67067  3.52524  3.01724  3.00945  2.64165  3.76982  3.29469  3.35980  3.23149  2.70953  2.90053  2.80605  2.55611  4.08669  3.52142
          2.68618  4.42225  2.77519  2.73123  3.46354  2.40513  3.72494  3.29354  2.67741  2.69355  4.24690  2.90347  2.73739  3.18146  2.89801  2.37887  2.77519  2.98518  4.58477  3.61503
          0.05491  6.34906  2.96263  0.61958  0.77255  0.00000        *
      1   1.85458  4.97177  5.25772  5.11150  3.08317  4.68531  4.99660  2.29317  4.89333  2.21381  3.63812  4.70077  5.28514  5.02200  4.89677  3.94172  3.72771  0.88269  4.66437  3.70054      1 v - - H
          2.68618  4.42225  2.77519  2.73123  3.46354  2.40513  3.72494  3.29354  2.67741  2.69355  4.24690  2.90347  2.73739  3.18146  2.89801  2.37887  2.77519  2.98518  4.58477  3.61503
          0.00274  6.29689  7.01924  0.61958  0.77255  0.61121  0.78240
      2   2.68388  5.69592  2.79379  1.70132  5.25800  3.51515  3.94249  4.36550  2.81371  3.15390  5.18636  3.57935  4.80095  2.41963  1.52011  3.04363  2.67472  3.24049  6.57819  4.76611      2 r - - H
          2.68618  4.42225  2.77519  2.73123  3.46354  2.40513  3.72494  3.29354  2.67741  2.69355  4.24690  2.90347  2.73739  3.18146  2.89801  2.37887  2.77519  2.98518  4.58477  3.61503
          0.00268  6.31868  7.04103  0.61958  0.77255  0.65639  0.73131
      3   1.69557  6.15246  2.61454  2.25344  4.80524  3.13834  3.50575  4.98574  2.35803  3.99893  4.81684  3.13184  4.80553  2.73142  1.63172  3.26000  2.93426  4.54380  6.58436  4.54785      3 r - - H
          2.68618  4.42225  2.77519  2.73123  3.46354  2.40513  3.72494  3.29354  2.67741  2.69355  4.24690  2.90347  2.73739  3.18146  2.89801  2.37887  2.77519  2.98518  4.58477  3.61503
          0.00981  6.32551  4.83198  0.61958  0.77255  0.48312  0.95934
[... removed this part to keep things reasonably sized ...]
    100   1.71561  3.64865  4.38480  3.42760  4.07808  4.69231  4.80409  3.24425  2.74698  3.28918  4.57912  4.05114  5.07262  3.40708  2.94600  2.98074  1.91959  1.46391  4.53056  4.17751    268 v - - E
          2.68618  4.42225  2.77519  2.73123  3.46354  2.40513  3.72494  3.29354  2.67741  2.69355  4.24690  2.90347  2.73739  3.18146  2.89801  2.37887  2.77519  2.98518  4.58477  3.61503
          0.01668  6.34179  4.21465  0.61958  0.77255  0.56154  0.84473
    101   2.40957  4.62174  3.16892  1.34079  4.55620  3.69164  2.87511  4.51265  2.91940  2.76532  5.18494  3.33723  4.80964  3.03069  2.08808  2.60615  3.78804  3.55867  4.81816  3.79971    269 e - - E
          2.68618  4.42225  2.77519  2.73123  3.46354  2.40513  3.72494  3.29354  2.67741  2.69355  4.24690  2.90347  2.73739  3.18146  2.89801  2.37887  2.77519  2.98518  4.58477  3.61503
          0.05884  6.32777  2.89402  0.61958  0.77255  0.69190  0.69440
    102   2.66258  5.17554  4.55953  2.87507  3.15074  3.58098  2.17249  2.34555  3.60158  2.58045  3.32878  3.02642  5.07290  3.26130  3.35997  3.29569  3.03481  2.15305  2.79882  3.02577    270 v - - E
          2.68618  4.42225  2.77519  2.73123  3.46354  2.40513  3.72494  3.29354  2.67741  2.69355  4.24690  2.90347  2.73739  3.18146  2.89801  2.37887  2.77519  2.98518  4.58477  3.61503
          0.00189  6.27083        *  0.61958  0.77255  0.00000        *
//
```

The important part for IC is the section with lines that start at 1 and go to 102. Each of these numbers labels a match state of the HMM profile; in this case the profile has 102 match states. On each line starting with one of these numbers are the negative natural logs of the emission probabilities for each of the 20 amino acids. Therefore, by parsing this file and exponentiating the negative of these values, one can establish the emission probabilities for every match state in every HMM profile.

To grab the IC values for each match state, I wrote a parser class, anvio.pfam.HMMProfile, that parses .hmm profiles.
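To make that concrete, here is a minimal sketch (the function name is mine, not anvi'o's) of recovering the emission probabilities from one match state line and computing its IC:

```python
import math

def match_state_ic(neg_log_scores):
    """IC of a match state from HMMER's 20 negative natural-log emission
    scores: recover probabilities with exp(-score), then apply
    IC = log2(20) + sum(f_i * log2(f_i))."""
    probs = [math.exp(-s) for s in neg_log_scores]
    return math.log2(20) + sum(f * math.log2(f) for f in probs if f > 0)

# a perfectly unconserved match state (uniform emissions) has IC = 0 ...
uniform = [-math.log(1 / 20)] * 20
assert abs(match_state_ic(uniform)) < 1e-9

# ... while a near-certain consensus amino acid pushes IC toward log2(20) ≈ 4.32
conserved = [-math.log(0.999)] + [-math.log(0.001 / 19)] * 19
assert match_state_ic(conserved) > 4
```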
I tried to make something generic, not just for the single-minded purpose of calculating IC, so it is basic but also extensible for future purposes. From what I can tell, most of the information in basic .hmm formats is being captured. Alongside this, IC is calculated for each match state in a dataframe that can be accessed like so:

```
print(anvio.pfam.HMMProfile(<filepath>).data['PF01965']['MATCH_STATES'])
```

This prints the following:

```
   MATCH_STATE CS MM RF CONS  MAP        IC
             0  E  -  -    K    1  0.469514
             1  E  -  -    K    2  0.992492
             2  E  -  -    V    3   1.79484
             3  E  -  -    L    4   0.83366
             4  E  -  -    V    5   1.30396
             5  E  -  -    L    6  0.758542
             6  E  -  -    L    7  0.779465
           ...
           148  E  -  -    S  214   1.88744
           149  S  -  -    R  215  0.457319
           150  S  -  -    G  216  0.439365
           151  G  -  -    P  217  0.608714
           152  G  -  -    G  218  0.433156
           153  G  -  -    A  219  0.529335
           154  H  -  -    A  220  0.347968
           155  H  -  -    I  221   0.28936
```

The match states are 0-indexed in this dataframe. And so with this class, anvi’o has easy access to the IC of each match state in an .hmm file. One need only instantiate the class as demonstrated above.

#### The distribution of information content

Rather than hard-coding a test for information content (IC) > 4, I wanted to keep the threshold a tunable parameter for the user (with a default of 4). To help guide anyone’s investigation, I have compiled all of the IC values for each match state in the representable InteracDome dataset (2375 HMM profiles). This totals 414619 IC values. Here is a histogram of them:

Wow, an absolutely gorgeous long-tailed distribution exhibiting Pareto-like qualities. Here it is in log-log:

I digress; its shape does not much matter. But what I do find interesting is that only a very small portion of sites have IC > 4, which is the cutoff threshold Kobren and Singh applied. Only 0.062% of match states have IC > 4, and on average, each HMM profile contains only 0.11 such match states.
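The consensus-matching filter itself reduces to a few lines. A sketch, assuming the gap-stripped consensus string, the aligned gene string, and the per-position IC values are already in hand (all names here are illustrative, not anvi'o's API):

```python
def passes_consensus_filter(consensus_aas, gene_aas, ic_values, ic_cutoff=4.0):
    """Discard a hit if the gene disagrees with the consensus amino acid
    at any match state whose information content exceeds the cutoff."""
    for cons, seq, ic in zip(consensus_aas, gene_aas, ic_values):
        if ic > ic_cutoff and seq.upper() != cons.upper():
            return False
    return True

# mismatch only at a low-IC site: the hit survives
assert passes_consensus_filter("CGA", "CGV", [5.1, 4.8, 1.2])
# mismatch at an IC > 4 site: the hit is thrown out
assert not passes_consensus_filter("CGA", "AGA", [5.1, 4.8, 1.2])
```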
To see the effect of applying this filter to real hits, I ran a series of tests with decreasing IC cutoffs:

```
total hits  IC cutoff  filtered
      2898          4         0
      2898        3.5        30
      2898        3.0       151
      2898        2.5       561
      2898        2.0      1380
      2898        1.5      2071
```

It is impossible to really say whether decreasing the IC cutoff below 4 is a good or bad idea without closely examining the hits that get filtered. I did not do this. What I do agree with is that IC > 4 match states are extremely rare, and thus represent the pinnacle of conservation. If the sequence mismatches the consensus value at these ultra-conserved positions, we should definitely be justified in throwing the hit out. It is at this moment unclear whether we could justifiably lower this threshold, or whether that would be throwing away useful data.

#### Cysteine & glycine are stubborn

Fun fact: this table shows the consensus of match states with IC > 4, partitioned by amino acid.

```
AA  Number
C      115
G       79
P       16
D       15
H       15
R        4
W        4
K        2
N        2
E        2
Q        1
A        1
T        1
```

Cysteine and glycine are astonishingly more likely to be ultra-conserved than any other residue.

## (6) Matching binding frequencies to the genes’ residues

### The flow

Once domain hits have been filtered, we are ready to attribute binding frequencies. The information flow of binding frequencies looks like this:

```
                    [InteracDome]                 [HMMER]
(binding frequency) ------------> (match state) ------> (user gene residue)
```

Binding frequencies are stored in either the conserved or representable InteracDome table and are attributed to match states of the IPfams. This is a mapping of binding frequency to match state. By running HMMER, we extracted alignment information of match states to user genes. This creates a mapping between user gene residues and match states, and by extension a mapping between user gene residues and binding frequencies. Because of the legwork already described in (4), there is very little left to describe.
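Expressed with toy data (all values below are made up for illustration), the mapping chain is just a lookup through the match states:

```python
# per-residue alignment info for one hit (cf. ali_info): which HMM match
# state each 0-indexed gene residue aligned to (toy values)
residue_to_match_state = {19: 8, 20: 9, 21: 10}

# InteracDome-style binding frequencies attributed to (match_state, ligand)
binding_freqs = {(8, "ADP"): 0.70, (9, "ADP"): 0.10, (10, "ADP"): 0.40}

# hand each binding frequency from its match state to the aligned residue
per_residue = {
    (res, ligand): freq
    for res, ms in residue_to_match_state.items()
    for (state, ligand), freq in binding_freqs.items()
    if state == ms
}

print(per_residue[(19, "ADP")])  # → 0.7
```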
The most noteworthy thing to talk about here is how I dealt with multiple overlapping hits.

### Dealing with multiple overlapping hits

Binding frequencies are held in a dataframe like this:

```
gene_callers_id  codon_order_in_gene  pfam_id  match_state ligand  binding_freq
              1                  169  PF00534           22    ADP      0.687948
              1                  169  PF13692            8    ADP      0.595441
              1                  174  PF00534           27    ADP      0.735759
              1                  174  PF13692           14    ADP      0.595441
              1                  184  PF00534           37    ADP     0.0697656
              1                  184  PF13692           24    ADP      0.101399
              1                  186  PF00534           39    ADP     0.0697656
              1                  186  PF13692           26    ADP      0.101399
              1                  187  PF13692           27    ADP      0.201761
              1                  189  PF00534           47    ADP     0.0697656
```

This is a great format, because not only do you have binding frequencies associated with the residues of user gene sequences (codon_order_in_gene), you can also see the exact match states that contributed each binding frequency. But this has redundant entries. What I want is a single binding frequency (for a given ligand) associated with each codon_order_in_gene, yet this format allows redundancies. For example, further down the table there are these two entries:

```
gene_callers_id  codon_order_in_gene  pfam_id  match_state ligand  binding_freq
              1                  167  PF00534           20   ALL_     0.0979682
              1                  167  PF13692            6   ALL_      0.103877
```

Here, we have an amino acid position that has two contributions: one from match_state 20 of pfam_id PF00534, and one from match_state 6 of pfam_id PF13692. To condense this information, I created a second dataframe that collapses the redundant rows. For simplicity, I quite simply average all binding frequencies (for a given ligand) that associate with a given amino acid position.

The above table is stored in the bind_freq attribute of anvio.interacdome.InteracDomeSuper, and the second, collapsed table is stored in the avg_bind_freq attribute of the same class. In the redundant case above, the corresponding entry in avg_bind_freq looks like this:

```
gene_callers_id ligand  codon_order_in_gene  binding_freq
              1   ALL_                  167      0.100922
```

match_state and pfam_id are dropped, since these are no longer meaningful columns.
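The collapse itself can be sketched in a few lines of plain Python (toy rows mirroring the ALL_ example above; with the truncated display values the average comes out to 0.1009226, matching the table's 0.100922 up to display precision):

```python
from collections import defaultdict
from statistics import mean

# redundant entries: two match states contributed to the same position
bind_freq_rows = [
    {"gene_callers_id": 1, "codon_order_in_gene": 167, "pfam_id": "PF00534",
     "ligand": "ALL_", "binding_freq": 0.0979682},
    {"gene_callers_id": 1, "codon_order_in_gene": 167, "pfam_id": "PF13692",
     "ligand": "ALL_", "binding_freq": 0.103877},
]

# collapse to one value per (gene, ligand, position) by averaging
grouped = defaultdict(list)
for row in bind_freq_rows:
    key = (row["gene_callers_id"], row["ligand"], row["codon_order_in_gene"])
    grouped[key].append(row["binding_freq"])

avg_bind_freq = {key: mean(freqs) for key, freqs in grouped.items()}
print(round(avg_bind_freq[(1, "ALL_", 167)], 4))  # → 0.1009
```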
However, their information is still held in bind_freq (and as you will see in the next section, bind_freq is stored as a tab-separated file, so this potentially very useful information is not lost to the user).

### Filtering low binding frequency scores

If users are not interested in positions with low binding frequency scores, they can apply a filter with --min-binding-frequency, and all entries below this threshold will be removed. This is also a very convenient way to maintain a reasonable table size for storage in the contigs database. Here is a histogram of non-zero binding frequencies from a test dataset:

It’s clear from the above figure that even discarding only frequencies < 0.05, which the user should be comfortable with, has a huge effect on table size.

## (7) Storing the per-residue binding frequencies into the contigs database

The data stored in the contigs database is a variant of avg_bind_freq. As a reminder, here is what avg_bind_freq looks like:

```
gene_callers_id  ligand  codon_order_in_gene  binding_freq
```

And here is the corresponding section of what is ultimately stored in the contigs database:

```
item_name  data_key  data_value  data_type  data_group
```

It may seem to be an oddly specific format, and that’s because it is. You see, up until a few months ago, anvi’o contigs databases had no means of storing per-nucleotide or per-residue information. Crazy, I know. The only thing that resembled this concept was per-nucleotide coverage values, but those are actually stored in the auxiliary database (a tumor of the profile database) that’s created during anvi-profile.

A few months ago I wanted to address this shortcoming, for the selfish reason of knowing that one day I would implement InteracDome functionality into anvi’o. And I didn’t want anvi’o to spit out some half-chewed tab-separated file to the user; I wanted the results to be stored in the contigs database so they could be used in integrated and interactive manners.
So I created two new tables in the contigs database, called amino_acid_additional_data and nucleotide_additional_data, which hijack the already existing framework for additional data tables. The convenience factor of doing this was through the roof, as I had to write very little code (here is the pull request), and now we can store arbitrary per-nucleotide and per-amino-acid annotations (not just InteracDome stuff, but practically anything besides .mp4 format).

However, two downsides are very appreciable. First, the framework requires a unique key for each amino acid. Yet it is technically two keys that define an amino acid: the gene_callers_id of the gene it belongs to, and the codon_order_in_gene (the residue position) of the amino acid relative to its gene. The same goes for defining a nucleotide: you need the position in the contig (pos_in_contig) and you need the contig_name. So how do you get one key from two pieces of information? The solution, which is ugly, is to create a single key by concatenating the two pieces of information into a string. Doing so has distinct disadvantages, the main one being that SQL queries are hindered: it becomes impossible to straightforwardly grab all amino acids belonging to a gene without first loading all entries into memory, for example. Regardless, I did it. Someone will hate me in 5 years. Take another look at the above table and you will see the string concatenation I’m talking about under the item_name column. Entries under item_name equal <gene_callers_id>:<codon_order_in_gene>; for example, the first entry 1:169 is the 169th residue (0-indexed) in gene 1.

The second downside to doing this is specific to anvi-run-interacdome. You see, there is really a lot of information missing from the above table. For example, it would be great to know which IPfam ID(s) contributed a given binding frequency, and with which of their match states they did so.
I already discussed that this information is all present in the dataframe bind_freq. Unfortunately, there just aren’t enough columns to accommodate this information, and I am fixed within this framework because I didn’t want to reinvent the wheel. To console victims of this tyranny, anvi-run-interacdome does more than just store the above table in the contigs database. It also outputs two tab-delimited files, named by default INTERACDOME-match_state_contributors.txt and INTERACDOME-domain_hits.txt.

INTERACDOME-match_state_contributors.txt is a table of the exact contents of bind_freq mentioned above. Since you’re somehow still reading this, I’ll show you the table again:

```
gene_callers_id  codon_order_in_gene  pfam_id  match_state ligand  binding_freq
              1                  169  PF00534           22    ADP      0.687948
              1                  169  PF13692            8    ADP      0.595441
              1                  174  PF00534           27    ADP      0.735759
              1                  174  PF13692           14    ADP      0.595441
              1                  184  PF00534           37    ADP     0.0697656
              1                  184  PF13692           24    ADP      0.101399
              1                  186  PF00534           39    ADP     0.0697656
              1                  186  PF13692           26    ADP      0.101399
              1                  187  PF13692           27    ADP      0.201761
              1                  189  PF00534           47    ADP     0.0697656
```

From this, one can trace back each and every match state that contributed to a given binding frequency. This is what giving power to the user means.

INTERACDOME-domain_hits.txt is a table of the exact contents of dom_hits mentioned above. Since you’re somehow still reading this, I’ll show you the table again:

```
pfam_name pfam_id corresponding_gene_call domain qual score bias c-evalue i-evalue hmm_start hmm_stop hmm_bounds ali_start ali_stop ali_bounds env_start env_stop env_bounds mean_post_prob match_state_align comparison_align sequence_align version
Beta_elim_lyase PF01212 1762 1 ! 20.9 0.1 1e-08 3.5e-06 33 169 .. 44 177 .. 34 215 .. 0.72 tvnrLedavaelfgke..aalfvpqGtaAnsill.kill.qr..geevivtepahihfdetgaiaelagvklrdlknkeaGkmdlekleaaikevgaheekiklisltvTnntagGqvvsleelrevaaiakkygiplhlDgA ++ +++ael+ + f+ Gt +++ l + + +r g+ +i++ h +et + g +l ++ +++G +++e+l+++i++ e i + +++v n+ G++ +++e+ ev +a+ +i++h+D+ LLQQARKQIAELINVSanEIYFTSGGTEGDNWVLkGTAIeKRefGNHIIISAVEHPAVTETAEQLVELGFELSYAPVDKEGRVKVEELQKLIRK-----ETILVSVMAVNNE--VGTIQPIKEISEV--LAEFPKIHFHVDAV 20
PAPS_reduct PF01507 1541 1 ! 36.1 0.1 3.6e-13 1.3e-10 2 164 .. 21 231 .. 20 234 .. 0.79 lvvsvsgGkdslVllhLalkafkpv....pvvfvdtghefpetiefvdeleeryglrlkvyepeeevaekinaekhgs.slyee.aaeriaKveplkk.......................................aLekldedall..tGaRrdesksraklpiveidedfek.........slrvfPllnWteedvwqyilrenipynpLydqgfr + +s+sgGkds +++La + ++ ++ ++ + ++ t++f++++e+ +++ +++ ++++ + + +++ + + + e+ + p k e++ ++a+ +G+R++es +r++ +++ +++ + ++Pl++W+ d+w+ + +++yn +y++ ++ VYFSFSGGKDSGLMVQLANLVAEKLdrnfDLLILNIEANYTATVDFIKKIEQLPRVKNIYHFCLPFFEDNNTSFFQPQwKMWDPsEKEKWIHSLP--KnaitleniddglkkyyslsngnpdrflryfqnwYKEQYPQSAIScgVGIRAQESLHRHSAVTKGENKYKNRcwinitlegNILFYPLFDWKVGDIWAATFKCELEYNYIYEKMYK 18
Ank_2 PF12796 1756 1 ! 32.2 0 6.7e-12 2.3e-09 29 84 .] 74 135 .. 53 135 .. 0.85 aLhyAakngnleivklLle...h.a..adndgrtpLhyAarsghleivklLlekgadinlkd aL Aa + +++ vk +l+ + + +d +g+tpL +A+ ++ +ei+k L+++gadinl++ ALLEAANQRDTKKVKEILQdttYqVdeVDTEGNTPLNIAVHNNDIEIAKALIDRGADINLQN 6
Ank_2 PF12796 1756 2 ! 28.5 0 9.5e-11 3.3e-08 22 75 .. 199 265 .. 195 267 .] 0.76 pn..k.ngktaLhyAak..ngnl...eivklLleha.....adndgrtpLhyAarsghleivklLle ++ + +g taL+ A+ +gn +ivklL+e++ dn+grt++ yA ++g++ei k+L + IDfqNdFGYTALIEAVGlrEGNQlyqDIVKLLMENGadqsiKDNSGRTAMDYANQKGYTEISKILAQ 6
IGPS PF00218 1615 1 ! 20.6 0.1 1.2e-08 4e-06 202 249 .. 195 242 .. 73 248 .. 0.88 LaklvpkdvllvaeSGiktredveklkeegvnafLvGeslmrqedvek +++lv+++++++ae i+t+e+++++k+ gv ++ vG +++r ++ +k IKQLVQENICVIAEGKIHTPEQARQIKKLGVAGIVVGGAITRPQEIAK 20
Ribosomal_L33 PF00471 1562 1 ! 66.6 1.5 1.1e-22 3.7e-20 2 47 .] 4 49 .] 3 49 .] 0.97 kvtLeCteCksrnYtttknkrntperLelkKYcprcrkhtlhkEtK +++LeC e+++r Y t+knkrn+perLelkKY p++r++ ++kE K NIILECVETGERLYLTSKNKRNNPERLELKKYSPKLRRRAIFKEVK 19
Ribosomal_S14 PF00253 1565 1 ! 83.3 0.1 3.9e-28 1.3e-25 2 54 .] 36 88 .. 35 88 .. 0.98 laklprnssptrirnrCrvtGrprGvirkfgLsRicfRelAlkgelpGvkKaS laklpr+s+p+r+r r++ +GrprG++rkfg+sRi+fRel ++g +pGvkKaS LAKLPRDSNPNRLRLRDQTDGRPRGYMRKFGMSRIKFRELDHQGLIPGVKKAS 20
Polysacc_synt_C PF14667 1593 1 ! 61.4 19.2 5.4e-21 1.9e-18 2 139 .. 371 516 .. 370 519 .. 0.83 LailalsiiflslstvlssiLqglgrqkialkalvigalvklilnllliplfgivGaaiatvlallvvavlnlyalrrllgikl...llrrllkpllaalvmgivvylllllllglllla...al..alllavlvgalvYllllll L+ ++s+ +l+++t++ siLq+l +k+a+ ++ i++l+kli+++++i+lf +G +iat+++ ++++++ +++l+r++ i++ ++ +++ +++vm i+ +l+l+++ ++ + +l + l +++g++v+ + l++ LSATIISTSLLGIFTIVLSILQALSFHKKAMQITSITLLLKLIIQIPCIYLFKGYGLSIATIICTMFTTIIAYRFLSRKFDINPikyNRKYYSRLVYSTIVMTILSLLMLKIISSVYKFEstlQLffLISLIGCLGGVVFSVTLFR 5
```

From this, one can learn how good a given hit was, see the alignment of the user gene to the HMM match states, and hopefully find anything else the user would be interested in.

## Conclusion

Wow! That was certainly a mouthful. I know that in a year or two, when I am writing my thesis, this will be helpful to me, and I hope that it has been helpful to you too. If you have questions, let me know. And if you want to see all of this in action, be sure to check out the InteracDome section of the infant gut tutorial.
2021-04-15 16:21:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5496276021003723, "perplexity": 4677.725793981152}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038087714.38/warc/CC-MAIN-20210415160727-20210415190727-00553.warc.gz"}
http://cs.stackexchange.com/questions
# All Questions

- **Lamport's Byzantine Generals Algorithm**: I've stumbled at the first OralMessage algorithm in Lamport et al.'s paper. I've searched the web and there are dozens of sites, restating in exactly the same terms and examples, which isn't helping ...
- **Every language that is reducible to a language in $\Sigma_i^p$ is also in $\Sigma_i^p$. How?**: The complexity class $\Sigma_{k}^{p}$ is recursively defined as follows: $\Sigma_{0}^{p} := P$, $\Sigma_{k+1}^{p} := P^{\Sigma_{k}^{p}}$. Why is every language ...
- **How to reformulate my problem as a mixed-integer quadratic problem**: I have an unknown $n$-dimensional vector $x$ whose analytical expression depends on the following sum $x = z + Ba$, where the vector $z$ and the matrix $B\in \mathbb{R}^{n\times s}$ are given. So the ...
- **Binary Palindrome**: Let an integer $n$ be given. Write the integers from 1 to $n$ in binary notation successively from left to right. In the resulting string consisting of zeros and ones, choose a palindrome substring of ...
- **Power method to calculate eigenvectors**: I've implemented a program for computing eigenvectors of some random, symmetric, $N\times N$ matrix using the power method. I have found difficulty in calculating all $N$ eigenvectors consistently, ...
- *(untitled entry; the body is a garbled C++ fragment)*: include include using namespace std; int main() { ...
- **How to Convert Row Major Order to Column Major Order?** [on hold]: If M is an n×n array stored in contiguous memory in row major order, how would one write pseudocode converting M to be stored in column major order taking constant space?
- **Does there exist a data compression algorithm that uses a large dataset distributed with the encoder/decoder?**: If my goal were to compress say 10,000 images and I could include a dictionary or some sort of common database that the compressed data for each image would reference, could I use a large dictionary ...
- **Suboptimal Solution for a combinatorial problem**: I have a cost function $f(X)=\|\hat{X}-X\|_2$ to minimize which depends on an $s\times s$ matrix $X$, where $\hat{X}$ is given and $\|X\|_2=\big(\sum_{i,j}x_{ij}^2\big)^{1/2}$. This matrix $X$ is ...
- **Good beginner source to learn about how computers work** [duplicate]: I am a senior-year high school student. I love programming and my chosen language is Java. I'm very much into high-level programming and Object Oriented design. Recently, I started to show first ...
- **Regarding the Turing Machine**: I am a high school student in the twelfth grade. I study high-level programming, and a little bit of basic computer science. I have recently started to understand what a Turing Machine is. I wanted ...
- **The membership problem for Type 0 grammars**: I have trouble solving the following problem: (i) Prove that for all finite sets $S$ of context free grammars, there exists a Turing machine $M$ such that for all grammars $G_1,G_2\in S$, we have ...
- **Understanding decidability**: Sorry, this is a basic question to understand decidability. It is the first time I see it in my undergrad course. 1) I am reading why the language $A_{DFA}$ is decidable ...
- **How to represent a 0-valid boolean formula?**: I read in these two papers http://www.ccs.neu.edu/home/lieber/courses/csg260/f06/materials/papers/max-sat/p216-schaefer.pdf and http://people.csail.mit.edu/madhu/papers/noneed/fullbook.ps that if we ...
- **How can I have constant time initialization for multi-dimensional arrays?**: For a regular array, I understand that if we have the tradeoff of space vs time, and we use more space to implement Which, Data, and When pointers to the actual array, we can initialize the array in ...
- **What's a good design pattern for bidirectional signals/events?** [migrated]: This problem feels rather basic, yet I've never known a great solution. I'm looking for a way for components in an application to notify each other while being as decoupled as possible (both at ...
- **Solving for the matrix $W$ in an equation involving $W \cdot W^{T}$**: Having large matrices, $W$ (the unknown) and $M$ (known), is it possible to solve for $W$ in this equation $$W \cdot W^{T} = M,$$ where $M$ can have negative entries?
- **What would be a good sized project for a student to show to an employer to get a first job / co-op job** [on hold]: I'm a Computer Science student interested in programming and software engineering who is looking for a co-op position through my school. I was wondering if anyone has gotten a job as a student or ...
- **Time Complexity Problem** [duplicate]: $$T(n) = 2\sqrt{n} \cdot T(\sqrt{n}) + \Theta(n)$$ I have been trying to solve this question but I could not find anything. I tried to build a recursion tree but I cannot find the sum. Do you ...
- **Parallelizing $k$-means**: I'm having trouble thinking about the following. If we have two machines 1 and 2 that evenly split a set of data points, does $k$-means separately, then averages the result, does this agree with just ...
- **Library for custom trees in Java or C++** [on hold]: Is there any library for building custom trees in Java or C++? Essentially I want to construct a tree where each node could have an arbitrary number of children. Also if an iterator over the children ...
- **Finding prime factors of non-random key generator**: I have been working on a challenge I found on the internet. It is as follows: You've stumbled onto a significant vulnerability in a commonly used cryptographic library. It turns out that the ...
- **Language Recognition Devices and Language Generators**: I have a few CS textbooks with me which discuss languages; well, actually 2, plus old course notes supplied a few years ago. I have been searching the web too and only seem to come up with vague responses ...
- **Automata Theory countably and uncountably infinite** [duplicate]: I'm going over some of the pre-requisite math regarding Automata theory, and finite representations. I read the following: If Σ is a finite alphabet, the set of all strings over the alphabet (Σ*) is ...
- **Why choose D* over Dijkstra?**: I understand the basis of A* as being a derivative of Dijkstra; however, I recently found out about D*. From Wikipedia, I can understand the algorithm. What I do not understand is why I would use D* ...
- **Finite representations and programming languages: countably infinite**: I'm going over some of the pre-requisite math regarding Automata theory, and finite representations. I read the following: If Σ is a finite alphabet, the set of all strings over the alphabet (Σ*) is ...
- **Modelling 2 object situation with Propositional Logic**: I'm reading up on propositional logic, and I'm completely stuck on this example; I've spent the past few hours trying to figure it out! Any pointers would be appreciated. There are 2 trees, each with signs ...
https://gateoverflow.in/155684/discrete
Let $G$ be a connected 3-regular graph in which each edge lies on some cycle. Let $S \subseteq V$ and let $C_1, C_2, \ldots, C_m$, where $m = c_{odd}(G-S)$, be the odd components of $G-S$. Let $e_G(C_i, S)$ denote the number of edges with one end in $C_i$ and the other in $S$. Then $\sum_{i=1}^{m} e_G(C_i, S)$ is

(1) $\le m$ (2) $\ge 5m$ (3) $\ge 3m$
http://mathoverflow.net/feeds/question/28519
## References for "modern" proof of Newlander-Nirenberg Theorem (MathOverflow)

**Question** (Spiro Karigiannis, 2010-06-17):

Hi,

I'm starting to prepare a graduate topics course on Complex and Kahler manifolds for January 2011. I want to use this course as an excuse to teach the students some geometric analysis. In particular, I want to concentrate on the Hodge theorem, the Newlander-Nirenberg theorem, and the Calabi-Yau theorem.

I have many excellent references (and have lectured before) on the Hodge and CY theorems. However, for the Newlander-Nirenberg theorem, I am finding it hard to find a "modern" treatment. I recall going through the original paper in my graduate student days, but I hope that there is a more streamlined version floating around somewhere. (I want to consider the general smooth case, not the easy real-analytic version.) Besides the original paper, so far I can only find these references:

J. J. Kohn, "Harmonic Integrals on Strongly Pseudo-Convex Manifolds, I and II" (Annals of Math, 1963)

and

L. Hormander, "An introduction to complex analysis in several variables" (Third Edition, 1990)

Both are easier to follow than the original paper. But my question is: are there any other proofs in the literature, preferably from books rather than papers? The standard texts on complex and Kahler geometry that I have looked at don't have this.

**Answer** (Tim Perutz, 2010-06-17): It's covered in Demailly's too-little-known book, *Complex Analytic and Differential Geometry* (http://mathonline.andreaferretti.it/books/view/19/Complex-analytic-and-algebraic-geometry), though the proof given there is apparently modelled on the references you cited. Edit: the MathOnline link currently seems to be non-functional, so here's a link to Demailly's webpage: http://www-fourier.ujf-grenoble.fr/~demailly/books.html

**Answer** (Morris Kalka, 2010-07-03): There is a proof due to Malgrange which can be found in Nirenberg's *Lectures on Linear Partial Differential Equations*. I am not sure that one can call the proof modern, but it is the simplest proof that I know.

**Answer** (Simon Salamon, 2010-07-25): This is not quite an answer to your question, but you might consult the book by Donaldson and Kronheimer, *The Geometry of Four-Manifolds*. In Chapter 2 they prove an integrability theorem for holomorphic vector bundles, the point being that this can be regarded as a simpler version of the Newlander-Nirenberg theorem, and (in my view) very suitable for your course. You might also want to mention the following simple example for instructional purposes: the nilpotent Lie group $H^3 \times \mathbb{R}$, where $H^3$ is the Heisenberg group, has an obvious left-invariant almost-complex structure whose Nijenhuis tensor vanishes. Although not a *complex Lie group*, it is easy to find independent local complex coordinates $z_1, z_2$. I suspect that there are similar classes of almost-complex examples where the integration is elementary.

**Answer** (Spiro Karigiannis, 2010-07-30): In my continued searches for modern proofs of Newlander-Nirenberg, I found this great source: "Applications of Partial Differential Equations to Some Problems in Geometry", a set of lecture notes by Jerry Kazdan, which are available on his website at UPenn (http://www.math.upenn.edu/~kazdan/). The proof of Newlander-Nirenberg in here is based on Malgrange's proof, which is also in Nirenberg's book (mentioned by Morris Kalka in his answer above), but these notes by Kazdan cover a *lot* of basic geometric analysis theorems, so they're an excellent resource. *I wish I knew about these when I was in graduate school...*

**Answer** (John Hubbard, 2011-03-02): Hi Spiro: I have had much the same difficulties as you, but I now know a modern proof.

At heart, the original proof is an application of the implicit function theorem. More specifically, let $U$ be a polydisk in $\mathbb{C}^n$ and consider the sequence of Banach manifolds
$$\mathrm{Diff}^{k,\alpha}(U,\mathbb{C}^n) \to AC^{k-1,\alpha}(U) \to (A^{0,2})^{k-2,\alpha}(U,TU).$$
These are respectively the diffeomorphisms $U\to \mathbb{C}^n$ of class $(k,\alpha)$, the almost complex structures on $U$ of class $(k-1,\alpha)$, and the $(0,2)$-forms on $U$ with values in the holomorphic tangent bundle, of class $(k-2,\alpha)$. The first map is the pullback of the standard complex structure, and the second is the Frobenius integrability form $\phi \mapsto \overline{\partial} \phi - \frac{1}{2} [\phi\wedge \phi]$.

The object is to show that the first map is locally surjective onto the inverse image of $0$ by the second. These spaces are Banach manifolds, and if you can show that the sequence of derivatives (respectively at the identity, at the standard complex structure, and at $0$) is split exact, the result follows from the implicit function theorem.

This sequence of derivatives is the Dolbeault sequence on $U$ (in the appropriate class), and it is split exact, though this is NOT obvious. There is an error in the original paper, or rather in the paper of Chern's that it depends on, but the result is true. The remainder of the mess in the original proof is due to the authors writing out the Picard iteration in the specific case, rather than isolating the needed result.

I am working on getting this written up with Milena Pabiniak, a graduate student here at Cornell. Write me at jhh8@cornell.edu if you are interested in seeing details.

John Hubbard
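A hedged gloss on the last step (my own reading of the answer, not part of the original): the derivative sequence Hubbard refers to is the Dolbeault sequence with values in the holomorphic tangent bundle,

$$A^{0,0}(U,TU) \xrightarrow{\ \overline{\partial}\ } A^{0,1}(U,TU) \xrightarrow{\ \overline{\partial}\ } A^{0,2}(U,TU),$$

taken in the Hölder classes $(k,\alpha)$, $(k-1,\alpha)$, $(k-2,\alpha)$ matching the three Banach manifolds above: vector fields parametrize infinitesimal diffeomorphisms, and $(0,1)$-forms with values in $TU$ parametrize almost complex structures near the standard one. Split exactness of this sequence is exactly what the implicit function theorem needs.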
https://forum.bebac.at/forum_entry.php?id=22837
## build RGtk2 from its source on Windows 10/11 [🇷 for BE/BA]

Dear bear users,

Since RGtk2 was retired from CRAN at the end (Dec. 15) of 2021, there will be no more updated binary files available for users to install on the Windows platform. On Linux & macOS, we have to install the GTK+ (GTK2) compiler or development packages before installing RGtk2. Then we can easily build the RGtk2 package for Linux or macOS. Luckily we can find an RGtk2 binary (zip) from cran.microsoft.com. We used to install RGtk2 from its binary, then run library(RGtk2) in the R console as the system admin. A pop-up dialogue asks whether the user wants to install GTK+. If yes, it downloads a zip file, unzips it, and puts all files under the folder /R/x.x.x/library/RGtk2/gtk. All files under ~/gtk are used as the runtime for GTK+. Done. However, we all know R is updated frequently. After each update we need to re-build or re-compile the RGtk2 package to see if it still runs well on the newly updated R. Here is how I built the RGtk2 package from its sources on Windows 10/11.

**Procedure for installing an updated package**

Take RGtk2 as our example. When installing an updated RGtk2, R first moves the original RGtk2 folder (~/library/RGtk2) under a newly created locked folder (e.g. ~/library/000RGtk2Lock) and creates a new & empty folder (~/library/RGtk2). Meanwhile, a new folder 00new is created under ~/library/000RGtk2Lock. R then installs all newly built files under the folder 00new. If the installation succeeds, R copies all files from the folder 00new to the folder ~/library/RGtk2 and deletes the locked folder. If installation fails, R copies all files under ~/library/000RGtk2Lock/RGtk2 back to ~/library/RGtk2 to restore the original RGtk2. It's a very good policy.

**Steps to install x64 RGtk2 (same for cairoDevice) on x64 Windows 10/11:**

1. download and install R & RTOOLS40;
2. download GTK+ development packages: here we have two choices.
One is from here (set as #1) and the other is the runtime we just mentioned above (set as #2). The #1 is mentioned in the file INSTALL in the RGtk2 source code tarball. When we browse the link, we can find there are two *.exe files. I chose gtk-dev-2.12.9-win32-2.exe. The #2 (gtk+-bundle_2.22.1-20101229_win64.zip) was coded in zzz.R. I recommend that #2 be used. When I tried to use #1 to compile RGtk2, it failed to build i386. Because I have x64 Windows 10? I don't know. Yes, it can build x64 RGtk2. However, R said the built x64 RGtk2 is not a real W32 package. Actually, I even failed to build x64 RGtk2 with #1 at the beginning, until I used the install option "--no-test-load". The load test is the final step of building an R package in R: if the package passes the loading test, a binary zip file is generated; if it fails, R deletes all built files and restores the original package back to ~/library, and the binary zip file is not generated either. The reason to use the install option "--no-test-load" is that after building RGtk2, the GTK+ runtime has not been installed yet; thus, R treats the whole procedure as a failure. I suspect the developers might have used #1 on a 32-bit machine with a 32-bit (x86) Windows OS. Also, #1 cannot serve as the runtime. Did I just answer my own question mentioned in my previous post? If we use #1 to build x64 RGtk2, then we need to download #2 as the GTK+ runtime when running library(RGtk2) in the R console. So I chose #2 as my GTK developer's pack. The #2 is not only the GTK+ development pack but also the GTK runtime. 3. set the GTK developer's pack with #2: unzip all files under a folder, e.g. C:\GTK, and set it as the environment variable GTK_PATH. OK, I followed the instruction. Unfortunately, I still got errors (... cannot find glib.h ...). I checked the error messages and found '... -mms-bitfields -I\$C:\GTK/include/gtk-2.0 ...'. The backslash looked weird. So I used 'C:/GTK' to set GTK_PATH. Yes, bingo. I finally built x64 RGtk2 successfully.
However, I still needed to download & install gtk+-bundle_2.22.1-20101229_win64.zip to ~/library/RGtk2/gtk when first running library(RGtk2). So I took a look at the source code. I found that if I set the following Windows environment variables, RGtk2 will not ask whether to install GTK+ when running library(RGtk2) for the first time: --- GTK = C:\GTK GTK_PATH = C:/GTK GTK_INCL = C:\GTK\INCLUDE GTK_LIB = C:\GTK\LIB INCLUDE = %GTK_INCL%;%GTK_INCL%\GTK-2.0;%GTK_INCL%\GLIB-2.0;%GTK_INCL%\PANGO-1.0;%GTK_INCL%\CAIRO;%GTK_INCL%\ATK-1.0;%GTK_LIB%\GTK-2.0\INCLUDE;%GTK_LIB%\GLIB-2.0\INCLUDE;%GTK_INCL%\LIBXML2;%GTK_INCL%\GDK-PIXBUF-2.0 LIB = C:\GTK\LIB PATH = C:\GTK\bin (add this line) --- 4. build x64 RGtk2. There are two ways to do this; both require the system admin. Download the RGtk2 source tarball from the R archived repos (RGtk2 v2.20.36.2) and place it under the R working folder. 1. Open the R console as the system admin and run: install.packages("RGtk2_2.20.36.2.tar.gz", repos=NULL, INSTALL_opts="--no-html --no-help --no-multiarch", type="source"). The install option "--no-test-load" is omitted since the runtime has been installed. After the installation, run library(RGtk2). You will find that RGtk2 does not ask to install GTK+. 2. Open a command line (the terminal) as the system admin, go to the R working folder, and type the command: R CMD INSTALL --no-help --no-html --no-multiarch --build RGtk2_2.20.36.2.tar.gz. A zip file RGtk2_2.20.36.2.zip is generated. This requires the folder ~/R/x.x.x/bin to be in the PATH (x.x.x means the version # of R). The reason to use "--no-html" & "--no-help" is to speed up the build process. Also, the original developers did use "--no-html" too. It is easy to find out: just open the developers' built RGtk2_2.20.36.2.zip; the folder 'html' under RGtk2 is empty. With "--no-help", the total file size of RGtk2_2.20.36.2.zip is reduced from 15 MB to 4 MB. What a great help.
The option "--no-multiarch" is definitely required; otherwise, the build process crashes when trying to build i386 RGtk2 (not until building x64 RGtk2). In conclusion, I think that it may not be practical to ask users to set up the GTK development pack and install the RGtk2 package from its source by themselves. Thus, I decided to bundle RGtk2 & the GTK+ runtime. Also, I added more GTK+/GTK2 themes into the bundle. I will explain how to install/choose GTK+/GTK2 themes on Ubuntu/macOS/Windows 10 & 11 later. That's it. GTK: GIMP Tool Kit; GTK+ a.k.a. GTK2; GTK has evolved into GTK3/GTK4 on unix-like OS, such as Linux. All the best, -- Yung-jin Lee bear v2.9.1:- created by Hsin-ya Lee & Yung-jin Lee Kaohsiung, Taiwan https://www.pkpd168.com/bear • build RGtk2 from its source on Windows 10/11 (yjlee168, 2022-03-16 00:11) [🇷 for BE/BA]
https://www.striim.com/docs/en/creating-a-cluster-in-microsoft-windows.html
# Striim 3.9.8 documentation

##### Creating a cluster in Microsoft Windows

1. Download Striim_3.9.8.zip, extract it, and move the extracted striim directory to an appropriate location.
2. Start Windows PowerShell as administrator (right-click the Windows PowerShell icon and select Run as administrator), change to the striim/conf/windowsService directory, and enter .\setupWindowsService.ps1.
3. Run striim\bin\sksConfig.bat and enter passwords for the Striim keystore and the admin and sys users. If hosting the metadata repository on Oracle or PostgreSQL, enter that password as well (see Configuring a DBMS to host Striim's metadata repository). If you are using a Bash or Bourne shell, characters other than letters, numbers, and the following punctuation marks must be escaped: , . _ + : @ % / -
4. If hosting the metadata repository on Derby, change its password as described in Changing the Derby password.
5. Edit striim\conf\startUp.properties, edit the following property values (removing any # characters and spaces from the beginning of the lines), and save the file:
   - WAClusterName: a name for the Striim cluster (note that if an existing Striim cluster on the network has this name, Striim will try to join it)
   - CompanyName: if you specify keys, this must exactly match the associated company name. If you are using a trial license, any name will work.
   - ProductKey and LicenseKey: if you have keys, specify them; otherwise leave blank to run Striim on a trial license. Note that you cannot create a multi-server cluster using a trial license.
   - Interfaces: if the system has more than one IP address, specify the one you want Striim to use; otherwise leave blank and Striim will set this automatically.
   - If hosting the metadata repository on Derby and Derby is not running on port 1527, set the following properties:
     MetaDataRepositoryLocation=<IP address>:<port>
     DERBY_PORT=<port>
   - If hosting the metadata repository on Oracle, set the following properties:
     MetadataDb=oracle
     MetaDataRepositoryUname=striimrepo
     If you use an SID, the connection URL has the format jdbc:oracle:thin:@<IP address>:<SID>, for example, jdbc:oracle:thin:@192.0.2.0:orcl. If you use a service name, it has the format jdbc:oracle:thin:@<IP address>/<service name>, for example, jdbc:oracle:thin:@192.0.2.0/orcl. In a high availability active-standby or RAC environment, specify all servers, for example, MetaDataRepositoryLocation=jdbc:oracle:thin:@(DESCRIPTION_LIST=(LOAD_BALANCE=off)(FAILOVER=on)(DESCRIPTION=(CONNECT_TIMEOUT=5)(TRANSPORT_CONNECT_TIMEOUT=3)(RETRY_COUNT=3)(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=192.0.2.100)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=racdb.localdomain)))(DESCRIPTION=(CONNECT_TIMEOUT=5)(TRANSPORT_CONNECT_TIMEOUT=3)(RETRY_COUNT=3)(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=192.0.2.101)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=racdb.localdomain)))) (see Features Specific to JDBC Thin for more information).
   - If hosting the metadata repository on PostgreSQL, set the following properties:
     MetadataDb=postgres
     MetaDataRepositoryUname=striim
     The PostgreSQL connection URL has the format <IP address>:<port>/striimrepo, for example 192.0.2.100:5432/striimrepo. In a high availability environment, specify the IP addresses of both the primary and standby servers, separated by a comma, for example, 192.0.2.100,192.0.2.101:5432/striimrepo.
6. Optionally, perform additional tasks described in Configuring Striim, such as increasing the maximum amount of memory the server can use.
7. Start the Derby and Striim services manually, or reboot to verify that they start automatically.
To uninstall the services, stop them, open a command prompt as administrator (the sc delete commands do not work in PowerShell), change to the striim\conf\windowsService\yajsw\bat\ directory, and enter the following commands:

uninstallService
sc delete Derby
sc delete com.webaction.runtime.Server
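To make step 5 concrete, here is a hypothetical startUp.properties fragment for a single-server cluster using a PostgreSQL-hosted metadata repository. Every value shown (cluster name, company name, addresses) is a placeholder of mine, not a value from the Striim documentation:

```
# Hypothetical values for illustration only
WAClusterName=MyStriimCluster
CompanyName=Example Corp
# ProductKey and LicenseKey left blank to run on a trial license
ProductKey=
LicenseKey=
Interfaces=192.0.2.50
MetadataDb=postgres
MetaDataRepositoryUname=striim
MetaDataRepositoryLocation=192.0.2.100:5432/striimrepo
```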
https://dsp.stackexchange.com/questions/53970/how-to-interpret-allan-deviation-plot-for-gyroscope?noredirect=1
# How to interpret Allan Deviation plot for gyroscope?

Regions with different slopes correspond to different types of noise in the Allan Deviation plot of a steady gyroscope at steady temperature. My doubts are:

1. What is the significance of the x axis in this plot, and how does cluster/window size affect the different noises? E.g. if a 7 degrees/Hr value on the Y-axis corresponds to a 10 second value on the x axis (forget which noise region it is, i.e. quantization noise, ARW, Bias Stability, etc.), what can I say from that particular (x, y) combination on the Allan deviation curve?
2. Bias Instability has two different units in two different sources (http://cache.freescale.com/files/sensors/doc/app_note/AN5087.pdf): dps/Hr and degrees/Hr. Which one is correct?

- what's dps? I don't think there are "correct" or "incorrect" units; as long as they're stated and describe the same physical entity/quantity, you can convert them, just like you can state speed either as m/s, km/h, kn, or lengths of average Giraffe per fortnight. Dec 7 '18 at 9:29
- Please see my answer at this link where I explain details of the Allan Variance and its utility. dsp.stackexchange.com/questions/53123/… Dec 8 '18 at 12:29

The significance is a statistical measure of the frequency error you would get if you averaged the frequency error over that duration of time, $$\tau$$, as compared to the average over the same duration of time, that much time prior. So it is a measure of the difference in error, and specifically the rms value of many of these measurements. This is useful for non-stationary processes such as frequency noise, since this difference operation can remove drift and over short time intervals result in a converging stationary process. In your case, with a y-axis value of 7 degrees/Hr and a horizontal axis of 10 seconds, this indicates that the error from the gyroscope as compared to what it reported 10 seconds prior would be 7 degrees/Hr rms.
(It is actually confusing to use "Y-axis" here, as that usually refers to the X, Y, and Z axes of the gyroscope itself.) This measurement is done specifically by taking consistent measurements at some update rate (could be 1/second or 10x/second, etc.); this rate dictates the left-most value on the horizontal axis, and each measurement represents the average value over that duration. Then, for each value $$\tau$$ (the averaging interval) on the horizontal axis, these measurements are averaged, the averages are differenced over the duration $$\tau$$, and then from these differences an rms value is computed (how many differences are used dictates a confidence interval in the result). The computation can be done by an overlap approach, resulting in many more individual results for the rms computation from the same duration of time and therefore a tighter confidence. Typically for ADEV in the clock world (where I currently work), the y-axis is given as fractional frequency error and the horizontal axis is averaging time in seconds. A white frequency noise process would result in the ADEV going down at the rate of $$1/\sqrt{\tau}$$, which means that as we continue to average over longer durations the rms error goes down as the inverse of the square root of the averaging time. You see this specifically in the example plot they give for the gyroscope, in that the error is indeed going down at $$1/\sqrt{\tau}$$: for 2 decades on the x-axis, the error as indicated by the y-axis drops 1 decade. This is the tell-tale sign of a white-FM process, meaning over these time durations the frequency noise is a stationary white noise process.
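The overlap approach described above can be sketched in a few lines of NumPy. This is my own illustrative implementation, not code from the answer; the function name and interface are assumptions:

```python
import numpy as np

def overlapping_adev(y, m):
    """Overlapping Allan deviation of evenly spaced frequency-error samples y,
    for an averaging window of m samples (so tau = m * sample_interval).

    Averages y over every length-m sliding window, differences windows spaced
    m samples apart, and returns sqrt(0.5 * mean(differences**2)).
    """
    y = np.asarray(y, dtype=float)
    if len(y) < 2 * m:
        raise ValueError("need at least 2*m samples")
    # cumulative-sum trick: window average over y[k:k+m] for every k
    c = np.concatenate(([0.0], np.cumsum(y)))
    avg = (c[m:] - c[:-m]) / m          # sliding averages, length N - m + 1
    d = avg[m:] - avg[:-m]              # overlapping differences over tau
    return np.sqrt(0.5 * np.mean(d ** 2))
```

For white frequency noise with per-sample deviation sigma, this estimate falls off as sigma/sqrt(m), reproducing the $$1/\sqrt{\tau}$$ slope discussed above; a flattening or upturn at large m is the signature of bias instability and drift.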
For frequency sources, as we average longer and longer we eventually become affected by non-white noise sources as we approach DC in the frequency domain ($$1/f$$ noise, then $$1/f^2$$, etc.), resulting in the ADEV plot bottoming out and then starting to increase as the averaging time is increased further (from frequency drift), indicating where we would actually begin to get a worse estimate if we averaged longer. You see this in their plot starting around 10 seconds and bottoming out around 1 minute.

To further help in understanding ADEV, below shows the calculation for ADEV at one averaging time $$\tau$$. Typically the ADEV is computed over a range of different $$\tau$$ values, and this result is plotted as an rms error versus $$\tau$$. Also there is the Allan Variance (AVAR), which is simply the square of the Allan Deviation. The result is a statistical estimate based on the rms over many of the blocks as diagrammed above.

A variant of the computation is done by overlapping the blocks as diagrammed below. This is really a minor detail in that both approaches converge to the same answer, given enough samples. This second approach results in more differences to rms in the same amount of time and therefore a tighter confidence interval. For this reason, this is the approach I always take when computing ADEV.

For a further example of the use and utility of the ADEV plot, see this answer: What determines the accuracy of the phase result in a DFT bin?

• Those figures are really nice! Are they yours? If so, can I have permission to use them in a presentation? :) Oct 16 '20 at 12:46
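The overlapping computation described above can be sketched in a few lines. This is a minimal illustration of the standard overlapping Allan deviation (the function name and test signal are my own, not code from any particular library):

```python
import numpy as np

def overlapping_adev(y, tau0, m):
    """Overlapping Allan deviation of rate (or fractional-frequency) samples y,
    taken every tau0 seconds, at averaging factor m (so tau = m * tau0)."""
    y = np.asarray(y, dtype=float)
    # mean of each length-m block, advancing one sample at a time (overlap)
    csum = np.cumsum(np.insert(y, 0, 0.0))
    avgs = (csum[m:] - csum[:-m]) / m
    # difference each block mean with the block mean tau seconds later
    diffs = avgs[m:] - avgs[:-m]
    # ADEV is the square root of half the mean-squared difference
    return np.sqrt(0.5 * np.mean(diffs ** 2))

# White noise: the ADEV falls as 1/sqrt(tau), as described above
rng = np.random.default_rng(0)
noise = rng.standard_normal(100_000)
for m in (1, 10, 100):
    print(m, overlapping_adev(noise, 1.0, m))
```

Plotting these values against $$\tau = m\,\tau_0$$ on log-log axes reproduces the $$1/\sqrt{\tau}$$ slope of the white-FM region.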
https://mathoverflow.net/questions/155099/is-every-t0-2nd-countable-space-the-quotient-of-a-separable-metric-space
# Is every T0 2nd countable space the quotient of a separable metric space?

Suppose the space $X$ has a countable basis and $X$ is $T_{0}$. Must there exist a separable metrizable space $Y$ and a quotient map $q:Y \rightarrow X$?

(Some surrounding facts: Every metrizable space is 2nd countable iff it's separable. Every 2nd countable space is 1st countable and hence Frechet and hence sequential and hence the quotient of a locally compact metrizable space. (However in the canonical proof, $Y$ is the disjoint union of a typically very large collection of convergent sequences [Franklin] and usually not separable, even if $X$ itself is a separable metric space.) If $X$ is $T_{0}$ and regular and 2nd countable then $X$ is metrizable (Urysohn metrization). For a non-$T_{0}$ counterexample let $X$ have cardinality larger than the real numbers and employ the indiscrete topology.)

If the answer is 'no', can a counterexample $X$ be $T_1$ or even $T_2$?

(Edit: the answer is 'yes' and Francois Dorais and Andrej Bauer provide two explicit solutions below and also point out relevant references. The $T_{1}$ case was settled by Paul Strong. As shown below, similar tactics settle the $T_{0}$ case. The question is relevant to topological domain theory. For example: "The similarity between our definitions and results and those of Schroder was first observed by Andrej Bauer, who proved that the sequential spaces with admissible representations are exactly the T0 (quotients of countably based) spaces..." From the paper 'Topological and Limit-space Subcategories of Countably Based Equilogical Spaces' by Menni and Simpson. homepages.inf.ed.ac.uk/als/Research/Sources/subcats.pdf)

• Have a look at the results leading to the proof of Theorem 3.8 in my paper with Carl Mummert arxiv.org/abs/0907.4126 One of the underlying goals of that paper was to keep the size of bases in check, though the statements of our results don't always make that explicit.
I think our methods give a positive answer in the case of $T_1$ spaces. I might be able to check it out sometime in the next two weeks but you might get around to it faster. – François G. Dorais Jan 20 '14 at 0:09

• Francois, was hoping you'd see the question! Thanks! – Paul Fabel Jan 20 '14 at 1:15

• The relevance of the question is that a 'yes' answer would characterize T_0 qcb spaces (quotients of 2nd countable spaces) as quotients of separable metric spaces, since sequential spaces are precisely the quotients of metrizable spaces, and since 2nd countable spaces are sequential, as noted in the question. Basic properties of qcb spaces are not obvious: mathematik.tu-darmstadt.de/~streicher/GrSt.pdf. – Paul Fabel Jan 20 '14 at 2:08

• homepages.inf.ed.ac.uk/als/Research/Sources/tcctd.pdf also offers various characterizations of qcb spaces. – Paul Fabel Jan 20 '14 at 2:14

• But it is well known by those who studied T0 qcb spaces that they are quotients of subspaces of the Baire space. Have you looked at Alex Simpson and Mathias Menni's paper on the largest common subccc of equilogical spaces and topological spaces? – Andrej Bauer Feb 2 '14 at 21:45

Every second-countable $T_0$ space is a quotient of a separable metric space and this essentially follows from the proof of the $T_1$ case given by Paul Strong in Quotient and pseudo-open images of separable metric spaces [Proc. Amer. Math. Soc. 33 (1972), 582-586].

A continuous map $f:X \to Y$ is sequence covering if for every convergent sequence $y_n \to y$ in $Y$ there is a convergent sequence $x_n \to x$ in $X$ such that $y_n = f(x_n)$ for all $n$ and $y = f(x)$. (Since the spaces under consideration are not necessarily Hausdorff, when I say "convergent sequence" I always mean a convergent sequence together with a choice of limit.)

Fact 1. If $f:X \to Y$ is a sequence covering continuous map and $Y$ is sequential then $f:X \to Y$ is a quotient map.
To prove this, we need to show that a map $g:Y \to Z$ is continuous if (and only if) $g \circ f:X \to Z$ is continuous. Since $Y$ is sequential, it is enough to show that if $y_n \to y$ is a convergent sequence in $Y$, then $g(y_n) \to g(y)$ is a convergent sequence in $Z$. Since $f:X \to Y$ is sequence covering, we can find a convergent sequence $x_n \to x$ in $X$ that maps to $y_n \to y$. Assuming only that $g \circ f:X \to Z$ is continuous, it follows that $g(f(x_n)) = g(y_n) \to g(f(x)) = g(y)$ is indeed a convergent sequence in $Z$.

Let $\omega+1 = \{0,1,\dots,\omega\}$ be the one-point compactification of $\omega$. The space $Y^{\omega+1}$ with the compact-open topology consists of all convergent sequences in $Y$ (along with a choice of limit). Since $\omega+1$ is compact Hausdorff, the evaluation map $e:(\omega+1)\times Y^{\omega+1}\to Y$ is continuous. Moreover, this map is obviously sequence covering. Even better:

Fact 2. If $g:X \to Y^{\omega+1}$ is a continuous surjection then the continuous map $f:(\omega+1)\times X \to Y$ defined by $f(n,x) = e(n,g(x)) = g(x)(n)$ is sequence covering.

Combining these facts, we see that if $Y^{\omega+1}$ is sequential and the continuous image of a separable metric space $X$ then $Y$ is a quotient of the separable metric space $(\omega+1)\times X$. Now if $Y$ is $T_0$ and second-countable then so is the space $Y^{\omega+1}$. Therefore, it suffices to prove:

Fact 3. Every second-countable $T_0$ space $Y$ is the continuous image of a separable metric space.

To see this, fix a countable base $(U_n)_{n\lt\omega}$ for $Y$. Let $X \subseteq \omega^\omega$ consist of all $x:\omega\to\omega$ such that $(U_{x(n)})_{n \lt\omega}$ enumerates a neighborhood basis for some point of $Y$. Since $Y$ is $T_0$, a neighborhood basis for a point uniquely determines that point. Therefore, we obtain a natural surjection $f:X \to Y$, which is easily seen to be continuous.
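Since Fact 2 is stated without proof, here is a short argument (my own wording) for why the map $f$ there is sequence covering:

```latex
\begin{proof}[Sketch for Fact 2]
Let $y_n \to y$ be a convergent sequence in $Y$ (with chosen limit $y$).
The map $s \colon \omega+1 \to Y$ defined by $s(n) = y_n$ and $s(\omega) = y$
is continuous, i.e.\ $s \in Y^{\omega+1}$.
Since $g \colon X \to Y^{\omega+1}$ is surjective, choose $x \in X$ with $g(x) = s$.
In $(\omega+1) \times X$ we have $(n, x) \to (\omega, x)$, and
$f(n, x) = g(x)(n) = y_n$ while $f(\omega, x) = s(\omega) = y$.
Thus the convergent sequence $(n, x) \to (\omega, x)$ covers $y_n \to y$,
so $f$ is sequence covering.
\end{proof}
```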
Since a second-countable space is always sequential, we can combine the three facts above to conclude that every second-countable $T_0$ space is a quotient of a separable metric space. Actually, as pointed out in Andrej's answer the map in Fact 3 is already a quotient map (since it is an open mapping). Strong needed the more complicated construction since his assumptions are in terms of countable networks instead of countable bases. Here is an elementary construction of such a quotient. Let $X$ be a $T_0$-space and $B_0, B_1, B_2, \ldots$ a countable base for $X$. For any point $x \in X$ define $N(x) = \{i \in \mathbb{N} \mid x \in B_i\}$, the index set of basic neighborhoods of $x$. Given a sequence $\alpha : \mathbb{N} \to \mathbb{N}$ let $i(\alpha) = \{\alpha(k) \mid k \in \mathbb{N}\}$, the image of the sequence. The Baire space $\mathbb{N}^\mathbb{N}$ is countably based and $0$-dimensional with the ultrametric $$d(\alpha, \beta) = 2^{-\min_k (\alpha_k \neq \beta_k)}.$$ Let $D$ be the subspace $$D = \{\alpha \in \mathbb{N}^\mathbb{N} \mid \exists x \in X . i(\alpha) = N(x)\},$$ which consists of those sequences that enumerate the index set of some point in $X$. Note that the point $x$ in the definition of $D$ is unique for a given $\alpha$, if it exists, because $X$ is $T_0$. Define the map $q : D \to X$ by $$q(\alpha) = \text{"the x such that i(\alpha) = N(x)"}.$$ It is a basic exercise in topology to verify that $q$ is a quotient map. Thus, every countably based $T_0$-space is the quotient of a $0$-dimensional countably based ultrametric space. Also note that the map $N : X \to \mathcal{P}(\mathbb{N})$ is an embedding when the codomain is equipped with the Scott topology. • Fantastic answer! – Paul Fabel Feb 2 '14 at 20:14 The following canonical construction yields a `Yes' answer if $X$ is countable. Each $x \in X$ is assigned a metric space (or spaces) as follows. Let $U_{1},U_{2},...$ be a countable basis for $X$. 
For each $x \in X$ let $S_{x}$ denote the set of indices $n$ such that $x \in U_{n}$. Let $V(x,n)$ denote the intersection of the first $n$ basic open sets in the collection determined by $S_{x}$. Note $V(x,1)$, $V(x,2),\ldots$ is a nested local basis at $x$ whose intersection is the closure of $x$.

Case 1. Suppose the sequence $V(x,1)$, $V(x,2),\ldots$ is not eventually constant. Let $W(x,n)=V(x,n) \setminus V(x,n+1)$ with the discrete topology and distance $2^{-n}$ between distinct points. Notice we may extend the metric to $W_{x}$ (the union of the sets $W(x,n)$) so that all nonconstant convergent sequences converge to a single new point $x_{x}$ in the completion. Let $Y_{x}$ denote the completion of $W_{x}$ and observe there is a natural map $Y_{x} \rightarrow X$.

Case 2. If Case 1 does not hold, then the closure of $x$ is isolated. For each $y$ such that $x \rightarrow y$ build a compact metric space $Y(x,y)$ consisting of a nontrivial convergent sequence. Note there is a natural surjective map $Y(x,y) \rightarrow \{x,y\}$. Perform this construction even if Case 1 applies.

Now let $Y$ be the disjoint union of all metric spaces constructed above. By construction we have a natural surjective map $Y \rightarrow X$. To see this is a quotient, fix a nonclosed $A \subset X$. Since $X$ is a sequential space, obtain a convergent sequence $a_{n} \in A$ with a limit $x \notin A$. If for some fixed $n$ the constant sequence $a_{n} \rightarrow x$, then Case 2 shows the preimage of $A$ is not closed. Otherwise Case 1 shows the preimage of $A$ is not closed. Thus $Y \rightarrow X$ is a quotient map.

If $X$ is countable then by construction $Y$ is countable and hence separable.
https://www.rexygen.com/doc/ENGLISH/MANUALS/BRef/SC2FA.html
### SC2FA – State controller for 2nd order system with frequency autotuner

Licensing group: AUTOTUNING

Function Description

The SC2FA block implements a state controller for a 2nd order system (7.4) with a frequency autotuner. It is especially well suited for control (active damping) of lightly damped systems ($\xi <0.1$), but it can be used as an autotuning controller for an arbitrary system which can be described with sufficient precision by the transfer function

$F\left(s\right)=\frac{{b}_{1}s+{b}_{0}}{{s}^{2}+2\xi \Omega s+{\Omega }^{2}},$ (7.4)

where $\Omega >0$ is the natural (undamped) frequency, $\xi$, $0<\xi <1$, is the damping coefficient and ${b}_{1}$, ${b}_{0}$ are arbitrary real numbers.

The block has two operating modes: "Identification and design mode" and "Controller mode".

The "Identification and design mode" is activated by the binary input $\mathtt{\text{ID}}=\mathtt{\text{on}}$. Two points of the frequency response with given phase delays are measured during the identification experiment. Based on these two points a model of the controlled system is built. The experiment itself is initiated by a rising edge at the TUNE input. A harmonic signal with amplitude uamp, frequency $\omega$ and bias ubias then appears at the output mv. The frequency runs through the interval $⟨\mathtt{\text{wb}},\mathtt{\text{wf}}⟩$, increasing gradually; the current frequency is copied to the output w. The rate at which the frequency changes (sweeping) is determined by the cp parameter, which defines the relative shrinking of the initial period ${T}_{b}=\frac{2\pi }{\mathtt{\text{wb}}}$ of the exciting sine wave in time ${T}_{b}$, thus

${c}_{p}=\frac{\mathtt{\text{wb}}}{\omega \left({T}_{b}\right)}=\frac{\mathtt{\text{wb}}}{\mathtt{\text{wb}}{e}^{\gamma {T}_{b}}}={e}^{-\gamma {T}_{b}}.$

The cp parameter usually lies within the interval $\mathtt{\text{cp}}\in ⟨0.95,1)$.
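The exponential sweep defined by the cp parameter can be sketched numerically. The snippet below is an illustration only, not REXYGEN code; the names wb, cp, uamp and ubias mirror the block parameters, and the sample rate fs is an assumption. It generates the exciting signal with instantaneous frequency $\omega(t) = \mathtt{wb}\, e^{\gamma t}$, where $\gamma = -\ln(\mathtt{cp})/T_b$ follows from the formula above:

```python
import numpy as np

# Sketch of the logarithmic frequency sweep used during identification.
# wb, cp, uamp, ubias mirror the block parameters; fs is an assumed sample rate.
wb, cp, uamp, ubias, fs = 1.0, 0.995, 1.0, 0.0, 100.0

Tb = 2.0 * np.pi / wb              # initial period of the exciting sine
gamma = -np.log(cp) / Tb           # from cp = exp(-gamma * Tb)
t = np.arange(0.0, 200.0, 1.0 / fs)
omega = wb * np.exp(gamma * t)     # instantaneous frequency omega(t)
phase = (wb / gamma) * (np.exp(gamma * t) - 1.0)  # integral of omega(t) dt
mv = ubias + uamp * np.sin(phase)  # exciting signal at output mv
```

After one initial period $T_b$, the instantaneous frequency has grown by the factor $1/\mathtt{cp}$, which is why values of cp closer to one give a slower sweep.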
The lower the damping coefficient $\xi$ of the controlled system is, the closer to one the cp parameter must be. At the beginning of the identification period the exciting signal has a frequency of $\omega =\mathtt{\text{wb}}$. After a period of stime seconds the estimation of current frequency response point starts. Its real and imaginary parts are available at the xre and xim outputs. If the MANF parameter is set to 0, then the frequency sweeping is stopped two times during the identification period. This happens when points with phase delay of ph1 and ph2 are reached for the first time. The breaks are stime seconds long. Default phase delay values are $-6{0}^{\circ }$ and $-12{0}^{\circ }$, respectively, but these can be changed to arbitrary values within the interval $\left(-36{0}^{\circ },{0}^{\circ }\right)$, where $\mathtt{\text{ph1}}>\mathtt{\text{ph2}}$. At the end of each break an arithmetic average is computed from the last iavg frequency point estimates. Thus we get two points of frequency response which are successively used to compute the controlled process model in the form of (7.4). If the MANF parameter is set to 1, then the selection of two frequency response points is manual. To select the frequency, set the input $\mathtt{\text{HLD}}=\mathtt{\text{on}}$, which stops the frequency sweeping. The identification experiment continues after returning the input HLD to 0. The remaining functionality is unchanged. It is possible to terminate the identification experiment prematurely in case of necessity by the input $\mathtt{\text{BRK}}=\mathtt{\text{on}}$. If the two points of frequency response are already identified at that moment, the controller parameters are designed in a standard way. Otherwise the controller design cannot be performed and the identification error is indicated by the output signal $\mathtt{\text{IDE}}=\mathtt{\text{on}}$. The IDBSY output is set to 1 during the "identification and design" phase. 
It is set back to 0 after the identification experiment finishes. A successful controller design is indicated by the output $\mathtt{\text{IDE}}=\mathtt{\text{off}}$. During the identification experiment the output iIDE displays the individual phases of the identification: $\mathtt{\text{iIDE}}=-1$ means approaching the first point, $\mathtt{\text{iIDE}}=1$ means the break at the first point, $\mathtt{\text{iIDE}}=-2$ means approaching the second point, $\mathtt{\text{iIDE}}=2$ means the break at the second point and $\mathtt{\text{iIDE}}=-3$ means the last phase after leaving the second frequency response point. An error during the identification phase is indicated by the output $\mathtt{\text{IDE}}=\mathtt{\text{on}}$ and the output iIDE provides more information about the error. The computed state controller parameters are taken over by the control algorithm as soon as the SETC input is set to 1 (i.e. immediately if SETC is constantly set to on). The identified model and controller parameters can be obtained from the p1, p2, …, p6 outputs after setting the ips input to the appropriate value. After a successful identification it is possible to generate the frequency response of the controlled system model, which is initiated by a rising edge at the MFR input. The frequency response can be read from the w, xre and xim outputs, which allows easy confrontation of the model and the measured data. The "Controller mode" (binary input $\mathtt{\text{ID}}=\mathtt{\text{off}}$) has manual ($\mathtt{\text{MAN}}=\mathtt{\text{on}}$) and automatic ($\mathtt{\text{MAN}}=\mathtt{\text{off}}$) submodes. After a cold start of the block with the input $\mathtt{\text{ID}}=\mathtt{\text{off}}$ it is assumed that the block parameters mb0, mb1, ma0 and ma1 reflect formerly identified coefficients ${b}_{0}$, ${b}_{1}$, ${a}_{0}$ and ${a}_{1}$ of the controlled system transfer function and the state controller design is performed automatically. 
Moreover, if the controller is in the automatic mode and $\mathtt{\text{SETC}}=\mathtt{\text{on}}$, then the control law uses the parameters from the very beginning. In this way the identification phase can be skipped when starting the block repeatedly.

The diagram above is a simplified inner structure of the frequency autotuning part of the controller. The diagram below shows the state feedback, observer and integrator anti-wind-up. The diagram does not show the fact that the controller design block automatically adjusts the observer and state feedback parameters $f1,\dots ,f5$ after the identification experiment (and $\mathtt{\text{SETC}}=\mathtt{\text{on}}$).

The controlled system is assumed in the form of (7.4). Other forms of this transfer function are

$F\left(s\right)=\frac{\left({b}_{1}s+{b}_{0}\right)}{{s}^{2}+{a}_{1}s+{a}_{0}}$ (7.5)

and

$F\left(s\right)=\frac{{K}_{0}{\Omega }^{2}\left(\tau s+1\right)}{{s}^{2}+2\xi \Omega s+{\Omega }^{2}}.$ (7.6)

The coefficients of these transfer functions can be found at the outputs p1, ..., p6 after the identification experiment ($\mathtt{\text{IDBSY}}=\mathtt{\text{off}}$). The meaning of the output signals is switched when a change occurs at the ips input.

Inputs

- **dv** (Double/F64): Feedforward control variable
- **sp** (Double/F64): Setpoint variable
- **pv** (Double/F64): Process variable
- **tv** (Double/F64): Tracking variable
- **hv** (Double/F64): Manual value
- **MAN** (Bool): Manual or automatic mode; off = automatic mode, on = manual mode
- **ID** (Bool): Identification or controller operating mode; off = controller mode, on = identification and design mode
- **TUNE** (Bool): Start the tuning experiment (off→on); the exciting harmonic signal is generated
- **HLD** (Bool): Stop frequency sweeping
- **BRK** (Bool): Termination signal
- **SETC** (Bool): Flag for accepting the new controller parameters and updating the control law; off = parameters are only computed, on = parameters are accepted as soon as computed, off→on = one-shot confirmation of the computed parameters
- **ips** (Long/I32): Switch for changing the meaning of the output signals
  - 0 = two points of frequency response: p1 = frequency of the 1st measured point in rad/s, p2 = real part of the 1st point, p3 = imaginary part of the 1st point, p4 = frequency of the 2nd measured point in rad/s, p5 = real part of the 2nd point, p6 = imaginary part of the 2nd point
  - 1 = second order model in the form (7.5): p1 = ${b}_{1}$, p2 = ${b}_{0}$, p3 = ${a}_{1}$, p4 = ${a}_{0}$
  - 2 = second order model in the form (7.6): p1 = ${K}_{0}$, p2 = $\tau$, p3 = $\Omega$ in rad/s, p4 = $\xi$, p5 = $\Omega$ in Hz, p6 = resonance frequency in Hz
  - 3 = state feedback parameters: p1 = ${f}_{1}$, p2 = ${f}_{2}$, p3 = ${f}_{3}$, p4 = ${f}_{4}$, p5 = ${f}_{5}$
- **MFR** (Bool): Generation of the parametric model frequency response at the w, xre and xim outputs (off→on triggers the generator)

Outputs

- **mv** (Double/F64): Manipulated variable (controller output)
- **de** (Double/F64): Deviation error
- **SAT** (Bool): Saturation flag; off = the controller implements a linear control law, on = the controller output is saturated
- **IDBSY** (Bool): Identification running; off = identification not running, on = identification in progress
- **w** (Double/F64): Frequency response point estimate, frequency in rad/s
- **xre** (Double/F64): Frequency response point estimate, real part
- **xim** (Double/F64): Frequency response point estimate, imaginary part
- **epv** (Double/F64): Reconstructed pv signal
- **IDE** (Bool): Identification error indicator; off = successful identification experiment, on = identification error occurred
- **iIDE** (Long/I32): Error code; 101 = sampling period too low, 102 = error identifying one or both frequency response point(s), 103 = manipulated variable saturation occurred during the identification experiment, 104 = invalid process model
- **p1..p6** (Double/F64): Results of identification and design phase

Parameters

- **ubias** (Double/F64): Static component of the exciting harmonic signal
- **uamp** (Double/F64, default 1.0): Amplitude of the exciting harmonic signal
- **wb** (Double/F64, default 1.0): Frequency interval lower limit [rad/s]
- **wf** (Double/F64, default 10.0): Frequency interval upper limit [rad/s]
- **isweep** (Long/I32, default 1): Frequency sweeping mode; 1 = logarithmic, 2 = linear (not implemented yet)
- **cp** (Double/F64, range 0.5 to 1.0, default 0.995): Sweeping rate
- **iavg** (Long/I32, default 10): Number of values for averaging
- **alpha** (Double/F64, default 2.0): Relative positioning of the observer poles (in identification phase)
- **xi** (Double/F64, default 0.707): Observer damping coefficient (in identification phase)
- **MANF** (Bool): Manual frequency response points selection; off = disabled, on = enabled
- **ph1** (Double/F64, default -60.0): Phase delay of the 1st point in degrees
- **ph2** (Double/F64, default -120.0): Phase delay of the 2nd point in degrees
- **stime** (Double/F64, default 10.0): Settling period [s]
- **ralpha** (Double/F64, default 4.0): Relative positioning of the observer poles
- **rxi** (Double/F64, default 0.707): Observer damping coefficient
- **acl1** (Double/F64, default 1.0): Relative positioning of the 1st closed-loop poles couple
- **xicl1** (Double/F64, default 0.707): Damping of the 1st closed-loop poles couple
- **INTGF** (Bool, default on): Integrator flag; off = state-space model without integrator, on = integrator included in the state-space model
- **apcl** (Double/F64, default 1.0): Relative position of the real pole
- **DISF** (Bool): Disturbance flag; off = state-space model without disturbance model, on = disturbance model is included in the state-space model
- **dom** (Double/F64, default 1.0): Disturbance model natural frequency
- **dxi** (Double/F64): Disturbance model damping coefficient
- **acl2** (Double/F64, default 2.0): Relative positioning of the 2nd closed-loop poles couple
- **xicl2** (Double/F64, default 0.707): Damping of the 2nd closed-loop poles couple
- **tt** (Double/F64, default 1.0): Tracking time constant
- **hilim** (Double/F64, default 1.0): Upper limit of the controller output
- **lolim** (Double/F64, default -1.0): Lower limit of the controller output
- **mb1p** (Double/F64): Controlled system transfer function coefficient ${b}_{1}$
- **mb0p** (Double/F64, default 1.0): Controlled system transfer function coefficient ${b}_{0}$
- **ma1p** (Double/F64, default 0.2): Controlled system transfer function coefficient ${a}_{1}$
- **ma0p** (Double/F64, default 1.0): Controlled system transfer function coefficient ${a}_{0}$

2020 © REX Controls s.r.o., www.rexygen.com
https://www.clutchprep.com/physics/practice-problems/139990/calculate-the-work-w-done-by-the-gas-during-process-1-2-6-5-1-one-important-use-
# Problem: Calculate the work W done by the gas during process 1→2→6→5→1.

One important use for pV diagrams is in calculating work. The product pV has the units of Pa×m³ = (N/m²)⋅m³ = N⋅m = J; in fact, the absolute value of the work done by the gas (or on the gas) during any process equals the area under the graph corresponding to that process on the pV diagram. If the gas increases in volume, it does positive work; if the volume decreases, the gas does negative work (or, in other words, work is being done on the gas). If the volume does not change, the work done is zero.

###### Expert Solution

Work done at constant pressure: $W = P \cdot \Delta V$

Leg 1→2 (isobaric expansion):
P12 = 3P0, ΔV12 = Vf − Vi = V2 − V1 = 3V0 − V0 = 2V0, so W12 = P12⋅ΔV12 = 3P0⋅2V0 = 6P0V0.

Leg 2→6 (no volume change):
ΔV26 = V6 − V2 = 3V0 − 3V0 = 0, so W26 = P26⋅ΔV26 = 0.
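The excerpt stops after the first two legs. Assuming the standard grid figure for this problem, with point 1 = (V0, 3P0), 2 = (3V0, 3P0), 6 = (3V0, P0) and 5 = (V0, P0) (these coordinates are an assumption inferred from the legs computed above; the figure itself is not reproduced here), the remaining legs and the net work can be checked with a short script:

```python
# Net work around the cycle 1 -> 2 -> 6 -> 5 -> 1, using ASSUMED grid
# coordinates: 1=(V0,3P0), 2=(3V0,3P0), 6=(3V0,P0), 5=(V0,P0).
P0, V0 = 1.0, 1.0   # result comes out in units of P0*V0

points = [(V0, 3 * P0), (3 * V0, 3 * P0), (3 * V0, P0), (V0, P0), (V0, 3 * P0)]

W = 0.0
for (Va, Pa), (Vb, Pb) in zip(points, points[1:]):
    if Va != Vb:               # isochoric legs (2->6 and 5->1) do zero work
        W += Pa * (Vb - Va)    # isobaric leg: W = P * dV (negative on compression)

print(W)  # 4.0, i.e. W = 4*P0*V0 under the assumed figure
```

Under this assumed geometry the leg 6→5 contributes W65 = P0⋅(V0 − 3V0) = −2P0V0, giving a net W = 6P0V0 − 2P0V0 = 4P0V0, which equals the area enclosed by the cycle.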
https://dsp.stackexchange.com/tags/impulse-response/hot
Tag Info

Hot answers tagged impulse-response (excerpts):

**Calculation of Reverberation Time (RT60) from the Impulse Response** (accepted): Encouraged by Hilmar, I've decided to update the answer with all the steps necessary to calculate the Reverberation Time from scratch. Presumably, it will be useful for others interested in this ...

**How do real-time convolution plugins process audio so quickly** (accepted): Real-time low-latency partitioned convolution reverb with a long impulse response works by dividing the impulse response into unequally sized partitions. The shortest partitions (blocks) are at the ...

**How do impulse response guitar amp simulators work?** (accepted): When talking about modeling, there are two things that usually get modeled: 1. the guitar amp, and 2. the speaker cabinet. Only the latter is modeled by an impulse response, which means that the ...

**This is how my professor is finding the frequency response of an LTI system when given the impulse response. Is this wrong?** (accepted): Your professor is right, and you're almost right too. The filter is clearly an FIR filter, but because its frequency response can be expressed as a geometric series, a recursive implementation is ...

**Why do linear phase filters have symmetric impulse responses?** (accepted): Actually, I think I see why. $$X(j\Omega) = |X(j\Omega)|e^{-j\theta(\Omega)}$$ $|X(j\Omega)|$ is purely real, and therefore if we take the IFT it is even and symmetric. $\theta(\Omega)= a\Omega$ ...

**Meaning of arrow head in dirac delta?**: On the wiki page for the Dirac delta function, you can find one meaning of the arrow. It somehow means that it is not "defined" as a constant value, but more as a factor applied to ...
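As an illustration of the first excerpt's topic, here is a minimal sketch of the standard Schroeder backward-integration method for estimating RT60 from an impulse response (the function name and the −5 dB to −25 dB fit range are my choices, not taken from the linked answer):

```python
import numpy as np

def rt60_schroeder(h, fs):
    """Estimate RT60 from impulse response h (sample rate fs) via Schroeder
    backward integration, fitting the decay between -5 dB and -25 dB and
    extrapolating the fitted slope to a 60 dB decay."""
    h = np.asarray(h, dtype=float)
    edc = np.cumsum(h[::-1] ** 2)[::-1]         # energy decay curve
    edc_db = 10.0 * np.log10(edc / edc[0])      # normalized to 0 dB at t = 0
    t = np.arange(len(h)) / fs
    fit = (edc_db <= -5.0) & (edc_db >= -25.0)  # region used for the linear fit
    slope, _ = np.polyfit(t[fit], edc_db[fit], 1)  # dB per second (negative)
    return -60.0 / slope

# Synthetic exponentially decaying response with a known RT60 of 0.5 s
fs = 8000
t = np.arange(0, fs) / fs
a = 3.0 * np.log(10.0) / 0.5     # decay rate giving a 60 dB drop in 0.5 s
h = np.exp(-a * t)
print(rt60_schroeder(h, fs))     # close to 0.5
```

Real measured responses have a noise floor, which is why the fit is done on an early portion of the decay (here −5 dB to −25 dB) rather than the full 60 dB range.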
http://stats.stackexchange.com/questions/22242/weighted-kernel-density-plot-in-r
# Weighted kernel density plot in R

I've subsetted and plotted my unweighted data, but I don't see how to make use of my weighting variable "FactBx$expwgt":

plot(
  density(FactB1$BAV_DIST),
  #### xlim = range(c(FactB1$BAV_DIST, FactB2$BAV_DIST, FactB3$BAV_DIST)),
  xlim = c(0, 10),
  main = "Density Plots",
  xlab = "miles",
  col = 2
)
lines(density(FactB2$BAV_DIST), col = 3)
lines(density(FactB3$BAV_DIST), col = 4)
lines(density(FactB6$BAV_DIST), col = 5)
TextVect <- c("Walk", "Bicycle", "Auto{Dropped}", "Auto{Parked}")
ColorVect <- c(2, 3, 4, 5)
LineVect <- c(0, 1, 2, 3)
mtext(TextVect, side = 1, line = LineVect, col = ColorVect)

I've been using the survey package to do weighted summaries, but I don't see how to do weighted density plots.

- The density function has a "weights = " parameter; would this be what you want? – jbowman Feb 3 '12 at 23:07
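To illustrate what a weighted density estimate computes, here is a minimal sketch in Python rather than R (the data and variable names are made up; R's `density(x, weights = w)` performs the analogous computation): each observation's Gaussian kernel is scaled by its normalized weight, so the curve still integrates to one.

```python
import numpy as np

def weighted_kde(x, grid, weights, bw):
    """Gaussian kernel density estimate in which each observation
    contributes in proportion to its (normalized) weight."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    # one Gaussian kernel per observation, evaluated on the grid
    u = (grid[:, None] - x[None, :]) / bw
    kernels = np.exp(-0.5 * u**2) / (bw * np.sqrt(2 * np.pi))
    return kernels @ w

x = np.array([1.0, 2.0, 2.5, 8.0])   # observations
w = np.array([1.0, 1.0, 1.0, 5.0])   # survey-style expansion weights
grid = np.linspace(-5.0, 15.0, 2001)
dens = weighted_kde(x, grid, w, bw=0.5)

# the weighted density still integrates to ~1
area = dens.sum() * (grid[1] - grid[0])
```

Because the lone observation at 8.0 carries most of the total weight, the estimated density is higher near 8 than near the cluster of unweighted points.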
2013-12-09 08:58:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6679335236549377, "perplexity": 5161.138208676485}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163949658/warc/CC-MAIN-20131204133229-00061-ip-10-33-133-15.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/the-stern-gerlach-experiment.681826/
# The Stern-Gerlach experiment

## Main Question or Discussion Point

"There are 47 electrons surrounding the silver atom nucleus, of which 46 form a closed inner core of total angular momentum zero – there is no orbital angular momentum, and the electrons with opposite spins pair off, so the total angular momentum is zero, and hence there is no magnetic moment due to the core. The one remaining electron also has zero orbital angular momentum"

Can anyone tell me why the remaining electron has zero orbital angular momentum? Thanks!

## Answers and Replies

It's in the ground state, so it has zero orbital angular momentum.

If you google "silver electron configuration", you'll get the answer that it is Kr 4d10 5s1. So "Kr" tells you that silver's electron configuration is krypton's plus an additional 11 electrons. Ten of them fill up the 4d shell completely, which does not have any net angular momentum: there are 5 electrons in each spin orientation, and the orbital angular momenta of the electrons in a filled d shell cancel out exactly. All that is left is the 11th electron, which has no orbital angular momentum because it is in an s shell, so the total angular momentum of the silver atom is equal to the spin of the 5s1 electron.
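The cancellation claimed for the filled 4d shell is simple arithmetic: the orbital magnetic quantum numbers m_l run over -2..2, each occupied by one spin-up and one spin-down electron, and their sum vanishes. A trivial sketch of my own to make that explicit:

```python
# Orbital angular momentum projections in a filled d shell:
# m_l = -2, -1, 0, 1, 2, each occupied by two electrons (spin up and down).
m_l_values = range(-2, 3)
total_Lz = sum(m for m in m_l_values for spin in ("up", "down"))
# total_Lz is 2 * (-2 - 1 + 0 + 1 + 2) = 0
```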
2020-07-03 17:58:40
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8840689659118652, "perplexity": 489.3467880349388}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655882634.5/warc/CC-MAIN-20200703153451-20200703183451-00265.warc.gz"}
https://www.tutorialspoint.com/how-to-find-all-adverbs-and-their-positions-in-a-text-using-python-regular-expression
# How to find all adverbs and their positions in a text using python regular expression?

As per the Python documentation, if one wants more information about all matches of a pattern than just the matched text, finditer() is useful as it provides match objects instead of strings. A writer who wanted to find all of the adverbs and their positions in some text would use finditer() in the following manner −

>>> import re
>>> text = "He was carefully disguised but captured quickly by police."
>>> for m in re.finditer(r"\w+ly", text):
...     print('%02d-%02d: %s' % (m.start(), m.end(), m.group(0)))
07-16: carefully
40-47: quickly

Published on 11-Jan-2018 13:23:49
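The same matches can be collected into a list of (word, start, end) tuples via Match.span() instead of printing them, which is often more convenient for further processing (a small variant of my own, not from the original page):

```python
import re

text = "He was carefully disguised but captured quickly by police."
# Match.span() returns the (start, end) pair in one call
adverbs = [(m.group(0), *m.span()) for m in re.finditer(r"\w+ly", text)]
```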
2020-08-13 11:50:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23308315873146057, "perplexity": 6715.584945944011}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738982.70/warc/CC-MAIN-20200813103121-20200813133121-00272.warc.gz"}
https://openpnm.org/examples/applications/porosity.html
Predicting Porosity#

The example explains how to estimate porosity of a network. We also discuss some challenges in estimating the porosity of the network and how to reduce the estimation error.

import numpy as np
import openpnm as op
import porespy as ps
import matplotlib.pyplot as plt
%config InlineBackend.figure_formats = ['svg']
np.random.seed(10)
%matplotlib inline
np.set_printoptions(precision=5)

Create a random cubic network#

pn = op.network.Cubic(shape=[15, 15, 15], spacing=1e-6)
pn.regenerate_models()

Calculate void and bulk volume#

Vol_void = np.sum(pn['pore.volume']) + np.sum(pn['throat.volume'])
inlet = pn.pores('left')
outlet = pn.pores('right')
A = op.topotools.get_domain_area(pn, inlets=inlet, outlets=outlet)
L = op.topotools.get_domain_length(pn, inlets=inlet, outlets=outlet)
Vol_bulk = A * L
Poro = Vol_void / Vol_bulk
print(f'The value of Porosity is: {Poro:.2f}')

WARNING: Attempting to estimate inlet area...will be low (openpnm.topotools._topotools.get_domain_area)
WARNING: Attempting to estimate domain length...could be low if boundary pores were not added (openpnm.topotools._topotools.get_domain_length)

The value of Porosity is: 0.21

Domain volume:#

One of the issues in estimation of porosity of the network is to estimate the domain volume
correctly. In a cubic network, for example, finding the length in the x direction as Nx * spacing is erroneous, because the domain length in the x direction includes additional lengths from half of the pore diameter of the pores located on the left and right sides. This is shown in the figure below: left) the green plane is located at the pore centers of the left boundary pores, so some pore volume on the left is ignored; applying a similar plane on the other sides of the network to find the length in each direction, the resulting bulk volume could be underestimated. right) the green plane is located at the far left side; applying a similar plane on the other sides of the network to find the length in each direction, the resulting bulk volume could be overestimated.

Overlapping pores and throats:#

Another issue is to ensure that the pore-scale models for volume are consistent: for example, that there are no overlapping pores, because in that case the void volume calculation will overestimate the real void space. Depending on the pore-scale model used for the shape of pores and throats, special methods may be needed to calculate the void volume to account for the overlap between throats and pores. Depending on the method that was used for assigning throat lengths, this overlap volume may be included in volume calculations. Existing methods to correct the throat volumes are the lens and pendular_ring methods available in the geometry models for throat_volume. For example, assuming a spheres and cylinders geometry, the lens model in the geometry collection tackles this problem. The spheres_and_cylinders geometry collection includes throat.total_volume and throat.volume.
throat.volume is the volume of the throat with corrections using the lens volume:

Throat volume (throat.volume) = volume of the cylinder (throat.total_volume) - the overlap of the cylinder with the spherical pores at its two ends (the lens volume)

Let's create a spheres and cylinders geometry:

pn.add_model_collection(op.models.collections.geometry.spheres_and_cylinders)
pn.regenerate_models()
Vol_void_initial = np.sum(pn['pore.volume']) + np.sum(pn['throat.total_volume'])
Vol_void_corrected = np.sum(pn['pore.volume']) + np.sum(pn['throat.volume'])
Poro_initial = Vol_void_initial / Vol_bulk
Poro_corrected = Vol_void_corrected / Vol_bulk
print(f'Initial Porosity: {Poro_initial:.5f}')
print(f'Corrected Porosity: {Poro_corrected:.5f}')

Initial Porosity: 0.12852
Corrected Porosity: 0.12590

Although in this example the lens volume was negligible, depending on the size of the pores and throats, this value can be too large to be neglected.

Note: The pendular_ring method calculates the volume of the pendular rings residing between the end of a cylindrical throat and spherical pores that are in contact but not overlapping. This volume should be added to the throat volume if the throat length was found as the center-to-center distance less the pore radii.

Void volume:#

In extracted networks, different geometrical shapes can be assigned to the pores and throats and their volume can be calculated using existing geometry models. However, the original segmented pore regions are not regular shapes. Therefore, the total pore and throat volumes in the network don't add up to the known void volume, since the pores and throats don't fill space perfectly.
np.random.seed(10)
im = ps.generators.overlapping_spheres(shape=[100, 100, 100], r=10, porosity=0.5, maxiter=0)
plt.imshow(im[:, :, 50]);
im_poro = ps.metrics.porosity(im)
print(f"Porosity from image: {im_poro*100:.1f}%")

Porosity from image: 63.3%

snow = ps.networks.snow2(im, boundary_width=0)
network = snow.network
pn = op.io.network_from_porespy(network)
pn['pore.diameter'] = network['pore.inscribed_diameter']
pn['throat.diameter'] = network['throat.inscribed_diameter']
model = op.models.geometry.throat_length.cubes_and_cuboids
pn.add_model(propname='throat.length', model=model, regen_mode='normal')
model = op.models.geometry.pore_volume.cube
pn.add_model(propname='pore.volume', model=model, regen_mode='normal')
model = op.models.geometry.throat_volume.cuboid
pn.add_model(propname='throat.volume', model=model, regen_mode='normal')
pn.regenerate_models()
Vol_void = np.sum(pn['pore.volume']) + np.sum(pn['throat.volume'])
Vol_bulk = 100**3  # from the image
pnm_poro = Vol_void / Vol_bulk
print(f"Porosity from pnm: {pnm_poro*100:.1f}%")

Porosity from pnm: 67.5%

Notes:

1) In the example above, we assumed the original image was available for calculating the bulk volume. If the original image is not available, calculating the bulk volume for the extracted network is another source of approximation error. Existing methods in topotools such as get_domain_length and get_domain_area tackle this problem to provide a better approximation of the bulk volume of the network.

2) In the example above, we assumed cubic pores and cuboid throats for the network. Assigning other diameters, shapes, and volume functions would result in a different estimation of the void volume.
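The ps.metrics.porosity call above reduces to a void-voxel fraction. A minimal NumPy stand-in, as a sketch of my own (assuming the convention that True marks void voxels; this is not the porespy implementation):

```python
import numpy as np

def image_porosity(im):
    """Fraction of void voxels in a boolean image (True = void)."""
    im = np.asarray(im, dtype=bool)
    return im.sum() / im.size

# a toy image in which exactly half the voxels are void
im = np.zeros((10, 10, 10), dtype=bool)
im[:5] = True
poro = image_porosity(im)
```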
2022-09-28 21:43:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5559789538383484, "perplexity": 3635.743052999649}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335286.15/warc/CC-MAIN-20220928212030-20220929002030-00504.warc.gz"}
https://mathstat.slu.edu/escher/index.php?title=Square_Block_Pattern_Exploration&printable=yes
# Square Block Pattern Exploration

Objective: Create wallpaper patterns using repeated square blocks.

Graph paper

## Exploration

This exploration uses Escher's line designs[1], labeled 1 through 4 (shown as images in the original), which could be produced from a single stamp rotated into four positions.

1. Draw the pattern corresponding to $\begin{matrix}2 & 4 \\ 2 & 4 \end{matrix}$. Use at least four copies of the basic 2x2 shape.
2. [question based on a design image not reproduced here]
3. Draw the pattern corresponding to $\begin{matrix}4 & 3 \\ 2 & 1 \end{matrix}$
4. What numerical pattern would generate this design: [design image not reproduced here]
5. Can you design a pattern that has reflection symmetry using these blocks?

Escher designed four square blocks which, together with their mirror images, make a large number of patterns that look like overlapping ribbons. Experiment with the EscherTiles applet to see these blocks in action.

Now consider a square block which looks like one line crossing over another. There are only two choices for rotating this block, labeled 1 and 2 (shown as images in the original).

1. How many two-by-two arrays are there using only 1's and 2's?
2. Find all possible patterns for these cross blocks, using only two-by-two arrays.

Handin: A sheet with answers to all questions.

1. Visions of Symmetry Page 44-52.
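For the first cross-block question, the count of two-by-two arrays can be checked by brute-force enumeration (a sketch of my own, not part of the handout):

```python
from itertools import product

# every 2x2 array whose four cells are each 1 or 2
arrays = list(product([1, 2], repeat=4))
count = len(arrays)   # 2 choices per cell, 4 cells
```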
2020-10-24 09:33:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.37806275486946106, "perplexity": 2480.850095827761}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107882103.34/warc/CC-MAIN-20201024080855-20201024110855-00005.warc.gz"}
https://www.lesswrong.com/posts/sC6dyvk7tNHfunBBC/in-the-pareto-optimised-crowd-be-sure-to-know-your-place
tldr: In a population playing independent two-player games, Pareto-optimal outcomes are only possible if there is an agreed universal scale of value relating each player's utility, and the players then act to maximise the scaled sum of all utilities. In a previous post, I showed that if you are about to play a bargaining game with someone when the game's rules are initially unknown, then the best plan is not to settle on a standard result like the Nash Bargaining Solution or the Kalai-Smorodinsky Bargaining Solution (see this post). Rather, it is to decide in advance how much your respective utilities are worth relative to each other, and then maximise their sum. Specifically, if you both have (representatives of) utility functions u1 and u2, then you must pick a θ>0 and maximise u1+θu2 (with certain extra measures to break ties). This result also applies if the players are to play a series of known independent games in sequence. But how does this extend to more than two players? Consider the case where there are three players (named imaginatively 1, 2 and 3), and that they are going to pair off in each of the possible pairs (12, 23 and 31) and each play a game. The utility gains from each game are presumed to be independent. Then each of the pairs will choose factors θ12, θ23 and θ31, and seek to maximise u1+θ12u2, u2+θ23u3 and u3+θ31u1 respectively. Note here that I am neglecting tie-breaking and such; the formal definitions needed will be given in the proof section. A very interesting situation comes up when θ12θ23θ31=1. In that case, there is a universal scale of "worth" for each of the utilities: it's as if the three utilities are pounds, dollars and euros. Once you know the exchange rate from pounds to dollars (θ12), and from dollars to euros (θ23), then you know the exchange rate from euros to pounds (θ31=1/(θ12θ23)). We'll call these situations transitive. Ideally we'd want the outcomes to be Pareto-optimal for the three utilities. 
Then the major result is: The outcome utilities are Pareto-optimal if and only if the θ are transitive. What if there are not three, but hundreds, or thousands of players, playing games with each other? If we assume that the utilities derived from each game are independent, then we get a similar result: given a sequence of players 1, 2,..., n such that each plays a game with the next player and n and 1 also play a game, then we still get: The outcome utilities are Pareto-optimal if and only if the θ are transitive (i.e. θ12θ23...θ(n-1)nθn1=1). What this means is that Pareto-optimal outcomes are only possible if there is a universal (linear) scale of value relating all the utilities of any player linked to another by direct or indirect trade links. The only important factor is the weight of your utility in the crowd. This result should be easy to extend to games with more than two players, but that's beyond the scope of this post. The rest of this post is a proof of the result, and may be skipped by those not of a technical bent. Proof: Apologies if this result is known or trivial; I haven't seen it before myself. Here, we will always assume that the set of possible outcomes is convex (mixed solutions are always possible) and that the Pareto-optimal outcome set is smooth (no corners) and contains no straight line segments. What these conditions mean is that if O is the set of Pareto-optimal outcomes, then each point in O has a well defined tangent and normal vector. And further, the slope of the tangent never stays constant as we move around O. Because O is the upper-right boundary of a convex set, this means that each point in O is uniquely determined by the slope of its tangent vector - equivalently, by the slope of its normal vector. The normal vector is particularly useful, for the point in O that maximises the utility u1+θu2 is the point that has normal vector (1, θ). This means that each point in O is uniquely determined by the value of θ. 
This is illustrated in the following diagram. Here θ=2/3, the purple set is the set of possible outcomes, the blue lines are the sets of constant x+(2/3)y, and the red normal vector (1, 2/3) is drawn from the maximising outcome point (1, 2.5): Now assume we have θ12, θ23 and θ31 as above. The utility outcomes for the three games (given by maximising u1+θ12u2, u2+θ23u3 and u3+θ31u1) will be designated o12=(x1,y2), o23=(x2,y3) and o31=(x3,y1), with the index corresponding to the players. We now consider a change to the utilities by adding the vectors v12=(x12,y12), v23 and v31 to each of these outcomes. We want the changes to be a Pareto improvement, and to remain within the set of possible outcomes for each game. The first condition can be phrased as requiring that the changes to each utility be non-negative; i.e. that x12+y31 (the change to player 1's utility), x23+y12 and x31+y23 all be non-negative. The second condition requires that o12+v12 is not to the right of the tangent line through o12. Since the normal to this tangent line is (1,θ12), this is equivalent to requiring that the dot product of v12 and (1,θ12) be non-positive, hence that x12+θ12y12, x23+θ23y23 and x31+θ31y31 all be non-positive. Then using all six inequalities above, we can see that: x12 ≤ -θ12(y12) ≤ θ12(x23) ≤ -θ12θ23(y23) ≤ θ12θ23(x31) ≤ -θ12θ23θ31(y31) ≤ θ12θ23θ31(x12) Now if the change is actually going to be a Pareto-improvement, one of the inequalities will have to be a strict inequality, giving x12 < θ12θ23θ31(x12). This is obviously impossible when θ12θ23θ31 = 1, so in that circumstance, there cannot be any Pareto-improvements. This demonstrates half of the implication. Now let T be the sixth root of θ12θ23θ31. If θ12θ23θ31 > 1 (equivalent to T > 1), then the following vectors (strictly) obey all the above inequalities: v12=(θ12θ23θ31 ε, -Tθ23θ31 ε), v23=(T^2θ23θ31 ε, -T^3θ31 ε), v31=(T^4θ31 ε, -T^5 ε). 
Since these obey the inequalities strictly and the outcome sets are smooth, for sufficiently small ε, these provide a Pareto improvement. If θ12θ23θ31 < 1 (equivalent to T < 1), then the following vectors work in the same way: v12=(-θ12θ23θ31 ε, Tθ23θ31 ε), v23=(-T^2θ23θ31 ε, T^3θ31 ε), v31=(-T^4θ31 ε, T^5 ε). This proof can be extended to n players in a similar fashion. When the outcome sets contain straight line segments, we need a tie-breaking system for cases when the "utility indifference lines" are parallel to these segments; apart from that, the result goes through. If the outcome sets contain corners, it remains true that there are no Pareto improvements when θ12θ23θ31=1. But it is possible to have θ12θ23θ31≠1 and be Pareto-optimal as long as the outcomes are on a corner. However, in these circumstances, there will always be values θ'12, θ'23 and θ'31, giving the same outcomes as θ12, θ23 and θ31 and with θ'12θ'23θ'31=1. This can be seen by replacing the corner with a series of limiting smooth curves.
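As a numerical sanity check of the construction above (a sketch of my own, plugging arbitrary non-transitive θ values into the post's explicit vectors): when the product of the θ exceeds 1, the vectors v12, v23, v31 should be strictly feasible for each game and give every player a strictly positive utility gain.

```python
import numpy as np

th12, th23, th31 = 1.5, 1.2, 0.8        # product = 1.44 > 1, so not transitive
P = th12 * th23 * th31
T = P ** (1 / 6)                         # the sixth root used in the proof
eps = 1e-3

v12 = np.array([P * eps,                  -T * th23 * th31 * eps])
v23 = np.array([T**2 * th23 * th31 * eps, -T**3 * th31 * eps])
v31 = np.array([T**4 * th31 * eps,        -T**5 * eps])

# feasibility: each change has strictly negative dot product
# with its game's normal vector (1, theta)
feasibility = [float(v12 @ np.array([1.0, th12])),
               float(v23 @ np.array([1.0, th23])),
               float(v31 @ np.array([1.0, th31]))]

# Pareto: every player's total utility change is strictly positive
gains = [v12[0] + v31[1],    # player 1: x12 + y31
         v23[0] + v12[1],    # player 2: x23 + y12
         v31[0] + v23[1]]    # player 3: x31 + y23
```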
2019-10-18 06:15:57
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8329944610595703, "perplexity": 858.0255895419253}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986677964.40/warc/CC-MAIN-20191018055014-20191018082514-00321.warc.gz"}
https://www.betterment.com/resources/retirement/401ks-and-iras/find-five-minutes-in-the-next-five-days/
You know who you are! The tax return remains defiantly unfinished. The IRA remains unfunded. And with the passing of days your dread grows larger and your shame grows deeper. "April 15th is almost here!" you wail. "What if I don't max out my IRA in time?… I'm just so busy!" Swat that monkey off your back. Image Source: National Geographic We hear you! Doing your taxes stinks. Plus you have far better – or more pressing – things to do. We know this. But… spend a second with the graph below. That's the difference between maxing out your IRA before April 15 (and doing it every year), and not doing it at all. Returns are calculated based on an annual rate of 8% and a yearly contribution of $5,000. That could be half a million of your money, not going to work. So swat that monkey off your back. Find five minutes in the next ten days. Do your taxes. And fund your IRA, or open a new one. And if you're feeling super organized, you can max it out for 2013 early, and increase returns by $705.00. You can also roll over your 401(k) to an IRA in just a few minutes. Promise!
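The graph's headline number can be reproduced with the standard future-value-of-an-annuity formula. A sketch (the $5,000/year and 8% figures come from the article; the 30-year horizon is my assumption, since the article doesn't state one):

```python
def fv_annuity(payment, rate, years):
    """Future value of identical end-of-year contributions
    compounding at a fixed annual rate."""
    return payment * ((1 + rate) ** years - 1) / rate

total = fv_annuity(5000, 0.08, 30)   # on the order of half a million dollars
```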
2017-01-23 12:44:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3573916554450989, "perplexity": 5592.7261260906225}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282926.64/warc/CC-MAIN-20170116095122-00189-ip-10-171-10-70.ec2.internal.warc.gz"}
https://brilliant.org/problems/recursive-digit-sum/
# Recursive Digit Sum Given a positive integer $$n$$, let $$S(n)$$ denote the digit sum of $$n$$. Consider the sequence of numbers given by $\begin{cases} n_1 = S(n) \\ n_k = S(n_{k-1} ) & k \geq 2 \\ \end{cases}$ For how many positive integers $$n \le 2013$$ does the sequence $$\{ n_k \}$$ contain the number $$9$$?
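To illustrate the definition (a sketch of my own, not a solution): repeatedly applying the digit sum quickly stabilizes at a single digit, so after a few steps the sequence is constant.

```python
def S(n):
    """Digit sum of a positive integer."""
    return sum(int(d) for d in str(n))

def digit_sequence(n, steps=6):
    """First `steps` terms n_1, n_2, ... of the recursive digit-sum sequence."""
    seq = []
    for _ in range(steps):
        n = S(n)
        seq.append(n)
    return seq
```

For example, digit_sequence(999, 4) gives [27, 9, 9, 9], while digit_sequence(2013, 3) gives [6, 6, 6].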
2017-05-28 22:25:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9560379385948181, "perplexity": 201.37075761571703}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463611569.86/warc/CC-MAIN-20170528220125-20170529000125-00424.warc.gz"}
https://almodarresi.com/caring-quotes-paihqif/porosity-of-soil-0fdee6
Loose, porous soils have lower bulk densities and greater porosities than tightly packed soils. Porosity change is a common characteristic of natural soils in fluid-solid interaction problems, and can lead to an obvious change of the soil-water retention curve (SWRC). Homogeneous samples usually have greater porosity values than mixtures. Porosity is greater in clayey and organic soils than in sandy soils.

Why use soil moisture monitoring? Determine the soil bulk density, the porosity, and the volumetric water content. Samples of the aquifer material were collected during drilling of various monitoring wells in the area.

SOIL POROSITY. The first article in this series described the relationship between mineral and organic material, air and water. Soil porosity is defined by the number of pores present within the soil. Soil porosity is the open space, or voids, within the soil profile that is free for air and water. Memorize the above equation, and be able to rearrange it to find a solution for any variable. A large number of small particles in a volume of soil produces a large number of soil pores. Porosity varies depending on particle size and aggregation. In a material like gravel the grains are large and there is lots of empty space between them, since they don't fit together very well. Healthy soils usually have a greater number of pores between and within soil aggregates, whereas poor-quality soils have few pores or cracks. 
A soil or rock material may have high porosity, but unless the voids are connected, the fluid cannot move from one pore to another. Porosity measures how much space there is between rocks. This attribute is commonly measured with regard to soil, since appropriate porosity levels are necessary for plants to grow. Most of the methods developed have been designed for small samples.

% pore space = 100 × (1 − 1.5/2.65) = 43.40

Porosity is the space between the soil particles, while aeration is the process of allowing air to circulate through the soil. I am going to share with you the methods that I personally use and that have worked best for me. Not everyone will agree, but I am just sharing what I have seen to be successful in my plants. It is not just the total amount of pore space that is important, but the size distribution of the pores, and the continuity between them, which determines the function and behaviour of soil. Aeration also facilitates the growth of beneficial microorganisms in the soil.

SOIL MOISTURE. Life processes in plants take place in water, which, in actively growing herbaceous plants, often accounts for 80% to 90% of the total green plant weight. The grain size of soil particles and the aggregate structures they form affect the ability of a soil to transport and retain water, air, and nutrients. The influence of porosity on soil water retention phenomena is investigated by a theoretical model and an experimental test in this study. Particle density — 2.65 gm./c.c. 
The porosity of soil is determined by texture, structure and compaction. Porosity and permeability are both properties of rocks and soil. The main difference between porosity and permeability is that porosity is a measurement of the space between rocks, whereas permeability is a measurement of how easy it is for fluids to flow between rocks. What is porosity? Soil porosity is the percentage of a soil that is pore space or voids. The size and types of the sediments in the soil will determine the amount of porosity. Dry soil weight — 87.8 gm. The pore space both contains and controls most of the functions of soil. A structure or part that is porous. Pore space also gets created when below-ground liquids release gas and as fertilized materials work their way into the soil. (In the worked example below, $\eta$ = 32%.) The state or property of being porous. Porosity is the value used to describe how much empty, or void, space is present in a given sample. Calculate the bulk density of a soil sample that has a porosity of 45%. Determination of porosity from a wireline log is only part of the problem, because the values … Porosity is a complex measurement, carried out on various samples taken from the site. Let's discuss one of these soil properties: porosity. Pore space in soil, also known as soil porosity, describes the amount of negative space between soil particles. Porosity: Laboratory Porosity Measurement. Compacted soils have high bulk density. If both bulk density and particle density are known, the total porosity can be calculated using these values. Porosity varies with texture. These open spaces occur between grains of sand, silt, clay and organic matter in the soil. Porous soils allow voids in the soil, creating aeration, which is important for root health. Porosity = (V_air + V_water) / V_total = 140 cm³ / 250 cm³ = 56%. Pores, or pore spaces, get created when plant roots, insects, and earthworms move through the soil.
Soil porosity (n) is the ratio of the volume of voids to the total volume of the soil: $$n = \frac{V_v}{V},$$ where $V_v$ is the volume of the voids (empty or filled with fluid), and $V$ is the total volume of the soil. Porosity varies greatly from one kind of soil to another because the grains of soil are loosely or densely packed. Porosity is the measurement of void spaces between rocks, whereas permeability is the measurement which tells how easily fluid can flow in between rocks. SOIL MOISTURE, POROSITY, AND BULK DENSITY. Sands have larger pores, but less total pore space than clays. Soil porosity is influenced by soil texture and structure. Example: What is the porosity of a soil which has a $$\rho_{\text{b}}=1.80 \frac{\text{g}}{\text{cm}^3}$$ and a $$\rho_{\text{s}}=2.65 \frac{\text{g}}{\text{cm}^3}$$? Answer: $$n = 1 - \frac{\rho_b}{\rho_s} = 1 - \frac{1.80}{2.65} \approx 0.32 = 32\%.$$ In a soil or rock the porosity (empty space) exists between the grains of minerals. Porosity is an intrinsic property of every material. For 1 cm³ of soil with a porosity of 0.45, assume a particle density of 2.65 g/cm³: the solid fraction is 1 cm³ − 0.45 cm³ = 0.55 cm³, so the bulk density is 0.55 × 2.65 g/cm³ ≈ 1.46 g/cm³. Other users of gridded data: the 30-arcsec gridded data are also available as a three-dimensional array of 8-bit unsigned binary integers, where each integer value represents the porosity multiplied by 100. Improving bulk density, alleviating compaction, and increasing soil porosity can be achieved through several easy-to-do management practices. The space between soil particles is referred to as "voids" or "pores". Porosity is usually used in parallel with soil void ratio (e), which is defined as the ratio of the volume of voids to the volume of solids. Grain size is classified as clay if the particle diameter is less than 0.002 mm (0.0008 inch). The porosity of soil is determined by the movement of air and water within the soil. The determination of porosity is paramount because it determines the ultimate volume of a rock type that can contain hydrocarbons.
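Taken together, the worked figures scattered through this section (43.40% pore space, the $\eta$ = 32% example, the ≈1.46 g/cm³ bulk density, and the 56% porosity from measured volumes) all follow from two relations: n = 1 − ρ_b/ρ_s and n = (V_air + V_water)/V_total. A short Python sketch reproduces them; the function names are mine, not from any soil-science library:

```python
def porosity(bulk_density, particle_density):
    """Total porosity as a fraction: n = 1 - rho_b / rho_s."""
    return 1.0 - bulk_density / particle_density

def bulk_density_from_porosity(n, particle_density=2.65):
    """Bulk density implied by a porosity n: rho_b = (1 - n) * rho_s."""
    return (1.0 - n) * particle_density

def porosity_from_volumes(v_pores, v_total):
    """Porosity from measured volumes: n = (V_air + V_water) / V_total."""
    return v_pores / v_total

# The worked figures from the text:
print(f"{100 * porosity(1.5, 2.65):.2f}% pore space")            # 43.40% pore space
print(f"eta = {100 * porosity(1.80, 2.65):.0f}%")                # eta = 32%
print(f"rho_b = {bulk_density_from_porosity(0.45):.4f} g/cm^3")  # rho_b = 1.4575 g/cm^3
print(f"n = {100 * porosity_from_volumes(140, 250):.0f}%")       # n = 56%
```

Note the units must be consistent: both densities in g/cm³, both volumes in cm³.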
… a readily measurable soil property and soil porosity (Schon, 2004). Soil structure. Yes, I am sure some brands of soil have more nutrients than others. The ratio of the volume of all the pores in a material to the volume of the whole. It refers to the amount of empty space within a given material. Calculate the porosity of the soil. Soil density and porosity are the two most important. We can imagine a soil mass with its constituents (i.e. solids, water and air). There is an INFO table defining the 11 standard soil layers for which the mean porosity was computed. RELEVANCE TO AGRICULTURE: porosity determines how quickly water infiltrates, how much water is retained for plant use, and how easily roots develop in the soil. A soil's porosity and pore-size distribution characterize its pore space, that portion of the soil's volume that is not occupied by or isolated by solid material. Porosity is the volume of those pores relative to the total space, and it's a good measure of how much water the ground can hold. The value and distribution of porosity, along with permeability and saturation, are the parameters that dictate reservoir development and production plans. Main Difference – Porosity vs. Permeability. Objectives: to understand the factors affecting soil moisture, and to learn the terminology associated with soil moisture, bulk density and porosity. Porosity, which is denoted by the lowercase letter n, of a soil sample is defined as the ratio of the volume of voids the sample contains to the total volume of the sample, and is expressed as a percentage. The air and water in the soil exist in the pore space, thus an understanding of soil porosity is a further step in understanding good turf management. Clay-rich soils have greater porosity than sandy soils.
Experimental program: the experiments of this study were aimed at determining the effects of fines content, sand fabric, salt concentration and porosity on the electrical resistivity of sands, as discussed in the following paragraphs. Porosity is the gaps or voids within a material. Used in geology, hydrogeology, soil science, and building science, the porosity of a porous medium (such as rock or sediment) describes the fraction of void space in the material, where the void may contain, for example, air or water. It is defined by the ratio $$\phi = \frac{V_V}{V_T},$$ where $V_V$ is the volume of void space and $V_T$ is the total volume of the material. The measurements were: Cylinder volume — 73.6 c.c. This is the area where water can reside in the soil. Compacted soil with restricted root depth. A great many methods have been developed for determining porosity, mainly of consolidated rocks having intergranular porosity (encountered in oil reservoirs). The average soil has a porosity of about 50%, and the pores are filled with air or water depending on the moisture content. Soil porosity affects drainage, allowing soils to hold water for plant consumption, and draining excess water. Problem: A soil core was taken for the determination of bulk density. Porosity and permeability are both terms related to rocks and soils, as both are measurements regarding them.
http://tex.stackexchange.com/questions/22225/determine-height-and-depth-of-letters-relative-to-the-font-size
# Determine height and depth of letters relative to the font size

I would like to scale graphical elements (like images, `tikz` and `tikz-timing` diagrams) relative to the font size, so that they have the same height as a normal uppercase letter (i.e. `X` or `M`; I noticed they have about the same height, but the tip of `A` is slightly higher). I also sometimes like to do this with the normal letter depth (e.g. the depth of `y` or `g`). I know that besides the possibility to use the `ex` or `em` units for font-size-relative lengths (1.6ex ≈ height of `X`), the current font size is stored inside `\f@size` as a string with the `pt` stripped. So for a normal 10pt font it contains `10`. There are also `\ht\strutbox` and `\dp\strutbox`, which are `.7\baselineskip` and `.3\baselineskip`, respectively, which in turn is about 1.2× the font size. However, a `\rule{1pt}{10pt}` is significantly higher than a 10pt `X`. This is not that surprising, because `\ht\strutbox` (which is anyway supposed to be higher than `X`) is 10pt × 1.2 × 0.7 = 8.4pt in size. Question: How is the actual letter height and depth calculated if the font size is known? Is this always a constant factor? Is this font dependent? I would like to avoid having to box an `X` and measure its size, but this would be plan B. - go with plan b. the relative height/depth of letters in a font is dependent on the font designer's concept of how the font should look -- or should be. when fonts were still metal, there were some fonts for which the manufacturer changed the design (usually by shortening the descenders) to fit into "normal" or "standard" type dimensions. – barbara beeton Jul 3 '11 at 19:43 Thanks @barbara, any suggestions about which letter I should use? `X` or `M` seem to be good candidates for the height, but I'm not sure about the depth: `y` or `g` maybe? I could just box the whole alphabet but this might lead to a worse result.
Like I said, the `A` is a little bit higher and taking the absolute maximum and minimum would most likely not look good. – Martin Scharrer Jul 3 '11 at 19:54 Just to get an example, I measured 1ex for font `pplr8t` (Palatino), which gives 4.68994pt, while a lowercase x is 4.84497pt high. With `ec-qplr` (TeX Gyre Pagella) the same measurements give 4.48999pt and 4.41998pt respectively. I'm with barbara: go with plan B. For the height a B can be a good choice (I wouldn't use M, which suffers the same problem as A); for the depth maybe a q, which has no fancy curves at the bottom. – egreg Jul 3 '11 at 19:55 @egreg: Thanks. Just one other little question: does TeX get the length of 1ex and 1em from the font or are they only dependent on the official font size? – Martin Scharrer Jul 3 '11 at 20:02 @Martin -- as has been pointed out by others, the x-height may or may not equal 1ex from the tfm metrics. but you wanted the cap height. pack the cap X in a box and measure. a cap A or any rounded letter (O, C, etc.) will usually be just a tad higher as an optical correction. if you want the max height of the font, you can usually get it from the (measured) height of a parenthesis if the font has been tuned to be suitable for math. similarly for the max depth -- however, beware of text-only fonts: parentheses (and slashes) may encompass only ordinary uppercase. it's up to the designer. – barbara beeton Jul 4 '11 at 17:49 The "ex height" is a length that has a weak relation to the height of an "x"; for example in `cmr10` the two are equal, while in TeX Gyre Pagella they don't: an "x" is 4.42pt high and 1ex is 4.845pt. It's generally impossible to predict the height of the uppercase letters based on the "type size", as barbara beeton underlines in her comment. One should also remember that TeX knows only the "bounding box" of the characters, which often protrude from it: for example the upper vertex of an A can overshoot a bit.
But for the problem of adapting a built symbol to the general shape of a font this overshoot is not important. The value of 1ex is font dependent and resides in "fontdimen 5"; it is directly available as `\fontdimen5\font`:

```
\dimen0=\fontdimen5\font
\dimen0=1ex
```

are equivalent assignments (which are immediately translated into points). With e-TeX it's not necessary to box a character in order to measure it:

```
\dimen0=\fontcharht\font`B
\dimen2=\fontchardp\font`q
```

which in LaTeX-speak are

```
\setlength{\dimen0}{\fontcharht\font`B}
\setlength{\dimen2}{\fontchardp\font`q}
```

These assign to `\dimen0` and `\dimen2` the height of an uppercase B and the depth of a lowercase q in the current font. There are also `\fontcharwd` and `\fontcharic` for retrieving the width and the italic correction. Again, these lengths are those of the bounding box, not necessarily of the character itself. - Thanks! Very good to know. I might use something like this. – Martin Scharrer Jul 3 '11 at 21:06 Just be sure to do the measurements when the current font is the one you want; `\font` can actually be replaced by the symbolic name of any font such as `\OT1/cmr/m/n/10` (which must be built with `\csname...\endcsname`, of course). – egreg Jul 3 '11 at 21:09
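For completeness, here is a minimal sketch of barbara's "plan B" — boxing a character and measuring it — using the standard LaTeX `\settoheight` and `\settodepth` commands, which work in any engine; the length names are made up for the example:

```latex
\documentclass{article}
\newlength{\capheight}% will hold the height of an uppercase X
\newlength{\lowdepth}%  will hold the depth of a lowercase q
\begin{document}
\settoheight{\capheight}{X}% measured in the current font
\settodepth{\lowdepth}{q}%
% A rule scaled to the measured cap height, as a stand-in for a graphic:
X\ \rule{2pt}{\capheight}\ X
% With e-TeX the same lengths come straight from the font metrics:
% \setlength{\capheight}{\fontcharht\font`X}
% \setlength{\lowdepth}{\fontchardp\font`q}
\end{document}
```

Remember to remeasure after every font change, since the lengths are frozen at the moment of assignment.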
https://taoofmac.com/space
2020 In Review It’s pretty obvious what to begin with: We’re in the middle of a pandemic. You might have heard of it by now, and it’s got no real end in sight even though vaccines were approved in record time and are starting to roll out over most of next year. Then a bunch of other stuff happened, and it was all largely unimportant in comparison. It was a year where hardly anything turned out as expected (best laid plans and all that), and given the nearly year-long confinement and isolation there is a natural tendency to focus on the stuff I missed rather than go on about what I did–but I like to follow the same pattern I did and , so here goes: Work and Industry I have to say this was likely the best working year I’ve ever had at Microsoft: • most senior team • by far the best line managers I ever had • highest (internal) visibility • largest and most challenging customer projects • and, with Microsoft’s renewed focus on the telco industry, also by far the best use of my skills over . But, alas, it was simultaneously one of the worst. The Good Bits Leaving the Portuguese subsidiary (and the relatively small market, with all that comes with it) was the best decision I ever made, if only because I am again fully immersed in the kind of international/culturally diverse work culture that I missed since . And this might just be bias due to my current assignments, but after five years I can finally say I really like working at Microsoft (with a few caveats regarding tech ). I am (sort of) back in the telco industry (i.e., currently working almost exclusively with telco clients, although there is a fair bit of variety and I’m sure to end up in the financial sector as well), taking on the sort of groundbreaking mega-projects that were the sort of thing I loved doing at Vodafone, and bouncing between technology, strategy and business. I can’t really begin to explain the affordances in terms of scope and perspective this kind of work brings, but it hasn’t been all roses. 
Part of the worst was obviously the pandemic-driven shift to , and I am completely fed up with not being able to have a stable, regular, family-friendly schedule. But that was not all of it. I am now past my third re-org in this calendar year, a lot of work was grueling, uneven and extremely depressing (E_TOO_MUCH_EXCEL), and I spent most of the year juggling too many projects, teams, business issues, high-level outcomes, and, in general, far too many different contexts to feel like I was making significant headway into all of it at any given moment in time. It’s the firehose that keeps on giving, and I miss focusing on one thing during a single day. What I Learned For starters, I learned that I can keep working (albeit at great personal cost) amidst the whole mess, and that I can bring clarity to some pretty gnarly business discussions by drawing upon years of experience, a renewed amount of soft skills and… a very large, very far reaching (but metaphorical) stick comprised of a good understanding of both technological status quo and hands-on experience on all sorts of stuff. Core cloud concepts like Infrastructure automation, application modernization, and deep analytics stuff are still largely outside the realm of traditional IT in many places, and many companies are eager to catch up. So looking back, on both Kubernetes and machine learning were a bit more than prophetic, to say the least. I also gathered a lot of evidence that multi-tasking is fine and good, but completely overrated–when taken to the extreme, context-switching will grind you down. What’s Next The upshot of this all has been that I am currently torn between doing some pretty unique things and having great impact where I am at now, and wanting a smaller, nicer context that would allow me to get to know people better, do proper technical deep dives into actual engineering problems (IT architectures are boring, no matter how much modernization you throw at them) and build stuff. 
Above all, I’m still looking for a mission of sorts. I need to do work that I know matters instead of wading through paperwork and various procedural hoops that completely take away my creative focus. So even while the business strategist in me is quite enjoying the ride, the engineer in me is slowly dying and feeling inexorably drawn back into simpler, “smaller” (in scope if not in scale) things that don’t require me to forego family life, technical depth, and my general well-being. I’m now at the point where I need to code just to stay sane and reassure myself I’m not turning into a jockey, and that sometimes means spending even more hours in front of a computer. The Meta Challenge I like to do the weird, totally out there stuff that does not fit into any pattern anyone’s done before, and traditional IT is all about managing risks (which I seem to be good enough at, given my continued survival) and following tried-and-true approaches (which bore me to tears). But if there’s something I’ve learned over my nearly 30 years of work (if you include freelancing and college startups since the early 90s), is that personal satisfaction and innovation has never come from playing it safe. Home Office and Personal Gear My has been my “natural habitat” for a year now, and the timing couldn’t have been better. has been good for my health and motivation (so that is definitely staying), but I need to make things physically tidier and make more room for other pursuits. I haven’t upgraded my main setup yet (still using my battered old as a glorified Remote Desktop console, with an external monitor to each side) and the new has been more than good enough for work, but I’m planning to do some changes next year based on what I’ve learned over the past few months. 
I got myself a sizable chunk of music hardware I’ve written about (like , which I love), and there is more music-related stuff I haven’t published yet due to lack of time, and having that gear set up permanently is something I’m keen on doing. VR Escapism But if I had to highlight one thing I got this year that made a significant difference, that would have to be the . Yes, VR has always been gimmicky. And yes, every time I use it I am wary of it being tied to my account, and avoid using any of its social features. So I’ve hedged my bets on its ecosystem (and adjusted my expectations) precisely because of that, but there is a lot more to this than just playing Beat Saber and Superhot and stepping outside the real world for a few hours now and then. Having a standalone, fully independent device is where the game is (literally) at, and this is a field I am going to enjoy keeping track of during 2021 because I’m positive they need decent competition, and it’s not going to come from PC headsets1. The caveat, of course, is that the ecosystem seems to be somewhat stalled creativity-wise. There are more copycat shoot-em-ups and re-spins of older games (like Myst, which is a laudable exception because it is just so much better this way) than original ideas, but the potential is there, and I think VR (even if only for gaming) has a better chance of becoming relevant than, say, on the desktop2. In a nutshell, I think of it like this: a is, at least for me, a much better investment than a “regular” games console at this point. I just really wish they’d sorted out the family angle. Personal Pursuits With a and nowhere to travel to, a lot of my stuff has migrated to home infrastructure (ironic for one who works primarily in the cloud, I know). 
I now have mirrors of all my GitHub repositories in a Gitea instance on it, as well as my own drone CI/CD setup (for building ARM binaries and some ESP32 stuff) and a plethora of other services, most (including my IoT stuff) fronted by simple but effective dashboards3. All of it sort of gravitated “home” during the latter half of the year, and some of it was set up because I just wanted a to test out a specific feature, but then it also became about a mix of keeping my wanting to rely less on public services. It’s all containerized, all backed up to the cloud, and requires remarkably little maintenance. However, I’m not using Kubernetes at home, since I can’t really run it on the (yet) and am sticking to docker-compose and vanilla for nearly everything. Although I have been meaning to overhaul my setup for the whole year, it has also just kept ticking and sprouted tiny ESP32 cameras and other HomeKit-compatible goodies without any hassles. This Site I’ve been hammering away in the background to move to static hosting, because of late I’ve felt that would be the right way to make some things simpler. Given the current state of CDNs, gone are the days when I’d need fancy HTTP optimizations on the server side, so I’d rather focus on running APIs and spending my compute budget on k3s than keeping a VM up “just” to serve randomly-updated HTML. I love to write, and have tried to publish at least one thing of substance every week, but truth be told that there is no real reason to keep a CPU core spinning 24/7 to do so. Call it an exercise in extreme penny-pinching, if you will, or just an excuse to refactor pretty much bullet-proof code, but the current blog/wiki engine is now able to render directly to Azure Storage, partly because that’s what I had handy, and partly because I keep pushing and asyncio to the limit. The entire site can still run on a without issues, and I’m testing static generation and upload on the same board, so I’m also having some fun in the process. 
And no, I haven’t had the time to rewrite the whole thing in yet, but I’m sorely tempted to do so. The production site is still deployed via piku, and the back-end has been moving around a few different cloud providers (we’re back on Google Cloud this month, since I wanted to clean up my Azure and AWS accounts), but I expect to go fully static in a few weeks, depending on whether I just use drone, move everything to the k3s cluster where I’m keeping a few public services or do something a little bit more creative. Health and Work-Life Balance None of us has gotten COVID (yet, that we’re aware), but the lack of actual exercise has taken a toll, especially considering that before January I was out of the house at least half of the day and walked everywhere. I’ve been feeling tired, anxious and depressed more often than not, and I blame it squarely on overwork rather than confinement. The improved things a fair bit, but one of the things I’m trying to figure out is how to do more exercise–and it’s not so much about gear (there’s a hardly used elliptical trainer in my office) but about having the time, by which I mean regular hours. Since I routinely get meetings scheduled on top of private events and family time and hardly push back, that is likely to have to change next year. Another thing I will strive to find time for are my hobbies. I have kept (literally) investing in music gear and software, but, like reading, music was one activity I just didn’t have much heart for this year. I did manage to put some things up on SoundCloud, but nothing too fancy yet. Like exercise, music requires time and motivation, and neither have been easy to come by this year. 1. Like me, there are a gazillion people out there who just don’t have the money, time or patience to run a gaming PC, although I must confess I’ve been much further from building a Ryzen workstation that could double as a gaming rig. ↩︎ 2. Yes, I know, it’s a low bar. 
But it’s pretty funny to consider that VR enthusiasts and (full) Linux desktop users are quite likely to be roughly in the same order of magnitude… ↩︎ 3. I really wish the Node-RED charting was better. Seriously, it’s fine, and I love its flexibility, ecosystem and ease of integration with anything, but rendering data could be, oh, so much better. I’m spoiled by , I guess. ↩︎ The Surface Book 3 After , I’ve finally formed an opinion of the 15” Surface Book 3, and it is largely positive given that it has become my main work machine (but not, as yet, my main development one, since I still prefer to develop on ). But even having put it away for the holiday break, I thought it timely to post my notes on it today. 300 days later It’s been enough time since to get a decent feeling of how things were progressing, and it’s clearly not going well. Numbers are now high enough that a few of my close friends have already been infected (or have close relatives who were). Although my family is OK, statistics is encroaching–just in time for the holiday break. Regardless of the name of this blog and my current employer, I’m still a UNIX guy first and foremost, and as such I can’t help but ponder the implications of this week’s little IBM/RedHat drama. And rather than just link to it with a short quip, I think a little more is in order.
http://physics.stackexchange.com/questions/43509/force-in-tetrahedron-edges
# Force in tetrahedron edges

I am looking for a formula that enables me to calculate the force in a tetrahedron edge, relating $F_b$ to $F_z$ through the beam thickness and length. I have the following assumptions:

• The beams are circular and hollow, so they have a radius $r$ and a thickness $t$.
• The static force points in the negative $z$-direction and acts at the c.o.g.
• The contribution of the vertices/joints is neglected.
• The tetrahedron is standing on one vertex as shown in the image below.
• The beam length is $a$.

I have drawn the situation in the following image:

Can someone provide me with a formula or some pointers on how to find the beam force?

Edit, using Jaime's hints. As the c.o.g. is aligned with the lower vertex, the structure is in equilibrium. Looking at the lower vertex, the vertical components of the three beam forces balance the gravitational force: $$F_{b_{y1}}+F_{b_{y2}}+F_{b_{y3}}=F_{g}\\ F_{b_{y1}} = F_{b_{y2}} = F_{b_{y3}}\\ F_{b_{y}} = \frac{m \cdot g}{3}$$ From the Encyclopedia Polyhedra written by Robert Gray I found that the half-cone angle is $35^\circ$. Since the vertical component of each inclined beam force is $F_{b_y} = F_b \cos 35^\circ$, this means that: $$F_{b} = \frac{F_{b_{y}}}{\cos 35^\circ} = \frac{m \cdot g}{3 \cos 35^\circ}$$ - So now consider one of the top vertices, looking at it from above. You have the force you just calculated pushing it out and up. Consider only the force in the horizontal plane, which will again require some trig. That force is balanced by the two equal unknown forces coming from the top bars, which should be easy with some more trig. – Jaime Nov 6 '12 at 17:05
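As a numerical sketch of that force balance (the mass value is an assumption, chosen only for illustration), one can compute the axial beam force from the vertical equilibrium at the lower vertex. Note the exact half-cone angle of a regular tetrahedron is $\arccos(2/\sqrt{6}) \approx 35.26^\circ$, and that because the beam is inclined, its axial force exceeds its vertical component:

```python
import math

m = 10.0   # total mass in kg (assumed value, for illustration only)
g = 9.81   # gravitational acceleration, m/s^2

# Exact half-cone angle of a regular tetrahedron (~35.26 degrees)
half_cone = math.acos(2 / math.sqrt(6))

F_by = m * g / 3                   # vertical component carried by each lower beam
F_b = F_by / math.cos(half_cone)   # axial force along each lower beam
print(math.degrees(half_cone), F_b)
```

The horizontal component $F_b \sin 35^\circ$ is what would then be balanced at the top vertices, as Jaime's comment describes.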
https://mxnet.apache.org/versions/1.9.1/api/python/docs/tutorials/packages/gluon/blocks/custom_layer_beginners.html
# Custom Layers (Beginners)

While the Gluon API for Apache MXNet comes with a decent number of pre-defined layers, at some point one may find that a new layer is needed. Adding a new layer in the Gluon API is straightforward, yet there are a few things that one needs to keep in mind. In this article, I will cover how to create a new layer from scratch, how to use it, what the possible pitfalls are, and how to avoid them.

## The simplest custom layer

To create a new layer in the Gluon API, one must create a class that inherits from the Block class. This class provides the most basic functionality, and all pre-defined layers inherit from it directly or via other subclasses. Because each layer in Apache MXNet inherits from Block, the words "layer" and "block" are used interchangeably inside the Apache MXNet community.

The only instance method that needs to be implemented is forward(self, x), which defines what exactly your layer is going to do during forward propagation. Notice that you don't need to specify what the block should do during backpropagation: the backpropagation pass for blocks is done by Apache MXNet for you.

In the example below, we define a new layer and implement the forward() method to normalize input data by fitting it into a range of [0, 1].

```python
# Do some initial imports used throughout this tutorial
from __future__ import print_function
import mxnet as mx
from mxnet import nd, gluon, autograd
from mxnet.gluon.nn import Dense

mx.random.seed(1)  # Set seed for reproducible results


class NormalizationLayer(gluon.Block):
    def __init__(self):
        super(NormalizationLayer, self).__init__()

    def forward(self, x):
        return (x - nd.min(x)) / (nd.max(x) - nd.min(x))
```

The rest of the methods of the Block class are already implemented, and the majority of them are used to work with parameters of a block. There is one very special method named hybridize(), though, which I am going to cover before moving to a more complex example of a custom layer.
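Before moving on, the normalization arithmetic itself can be sanity-checked in plain NumPy (no MXNet required); this is just the same min-max computation the layer performs:

```python
import numpy as np

def minmax_normalize(x):
    # Same computation as NormalizationLayer.forward: fit values into [0, 1]
    return (x - x.min()) / (x.max() - x.min())

# The minimum maps to 0, the maximum to 1, everything else in between
print(minmax_normalize(np.array([1.0, 2.0, 3.0])))  # -> [0.  0.5 1. ]
```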
## Hybridization and the difference between Block and HybridBlock

Looking into the implementation of existing layers, one may find that a block more often inherits from HybridBlock instead of directly inheriting from Block. The reason is that HybridBlock allows you to write custom layers that can be used in imperative programming as well as in symbolic programming. It is convenient to support both ways, because imperative programming eases the debugging of the code and symbolic programming provides faster execution speed. You can learn more about the difference between symbolic and imperative programming from this deep learning programming paradigm article.

Hybridization is a process that Apache MXNet uses to create a symbolic graph of a forward computation. This allows it to increase computation performance by optimizing the computational symbolic graph. Once the symbolic graph is created, Apache MXNet caches and reuses it for subsequent computations.

To simplify support of both imperative and symbolic programming, Apache MXNet introduces the HybridBlock class. Compared to the Block class, HybridBlock already has its forward() method implemented, but it defines a hybrid_forward() method that needs to be implemented.

The main difference between forward() and hybrid_forward() is the F argument. This argument is sometimes referred to as a backend in the Apache MXNet community. Depending on whether hybridization has been done or not, F can refer either to the mxnet.ndarray API or the mxnet.symbol API. The former is used for imperative programming, and the latter for symbolic programming. To support hybridization, it is important to use only methods available directly from the F parameter. Usually, there are equivalent methods in both APIs, but sometimes there are mismatches or small variations. For example, by default, subtraction and division of NDArrays support broadcasting, while in the Symbol API broadcasting is supported via separate operators.
Knowing this, we can rewrite our example layer using HybridBlock:

```python
class NormalizationHybridLayer(gluon.HybridBlock):
    def __init__(self):
        super(NormalizationHybridLayer, self).__init__()

    def hybrid_forward(self, F, x):
        return F.broadcast_div(F.broadcast_sub(x, F.min(x)),
                               (F.max(x) - F.min(x)))
```

Thanks to inheriting from HybridBlock, one can easily do a forward pass on a given ndarray, either on CPU or GPU:

```python
layer = NormalizationHybridLayer()
layer(nd.array([1, 2, 3], ctx=mx.cpu()))
```

```
[0.  0.5 1. ]
<NDArray 3 @cpu(0)>
```

As a rule of thumb, one should always implement custom layers by inheriting from HybridBlock. This gives more flexibility, and doesn't affect execution speed once hybridization is done. Unfortunately, at the moment of writing this tutorial, NLP-related layers such as RNN, GRU, and LSTM directly inherit from the Block class via the common _RNNLayer class. That means that networks with such layers cannot be hybridized. But this might change in the future, so stay tuned.

It is important to notice that hybridization has nothing to do with computation on GPU. One can train both hybridized and non-hybridized networks on both CPU and GPU, though hybridized networks would work faster. It is hard to say in advance how much faster, though.

## Adding a custom layer to a network

While it is possible, custom layers are rarely used separately. Most often they are used together with predefined layers to create a neural network: the output of one layer is used as the input of another layer. Depending on which class you used as a base one, you can use either the Sequential or HybridSequential container to form a sequential neural network. By adding layers one by one, one adds dependencies of one layer's input on another layer's output. It is worth noting that both Sequential and HybridSequential containers inherit from Block and HybridBlock respectively.

Below is an example of how to create a simple neural network with a custom layer.
In this example, NormalizationHybridLayer gets as an input the output from the Dense(5) layer and passes its output as an input to the Dense(1) layer.

```python
net = gluon.nn.HybridSequential()                # Define a Neural Network as a sequence of hybrid blocks
with net.name_scope():
    net.add(Dense(5))                            # Add Dense layer with 5 neurons
    net.add(NormalizationHybridLayer())          # Add our custom layer
    net.add(Dense(1))                            # Add Dense layer with 1 neuron

net.initialize(mx.init.Xavier(magnitude=2.24))   # Initialize parameters of all layers
net.hybridize()                                  # Create, optimize and cache computational graph

input = nd.random_uniform(low=-10, high=10, shape=(5, 2))  # Create 5 random examples with 2 features each in range [-10, 10]
net(input)
```

```
[[-0.13601446]
 [ 0.26103732]
 [-0.05046433]
 [-1.2375476 ]
 [-0.15506986]]
<NDArray 5x1 @cpu(0)>
```

## Parameters of a custom layer

Usually, a layer has a set of associated parameters, sometimes also referred to as weights. This is the internal state of a layer. Most often, these parameters are the ones that we want to learn during the backpropagation step, but sometimes these parameters might be just constants we want to use during the forward pass.

All parameters of a block are stored and accessed via the ParameterDict class. This class helps with initialization, updating, saving and loading of the parameters. Each layer can have multiple sets of parameters, and all of them can be stored in a single instance of the ParameterDict class. On a block level, the instance of the ParameterDict class is accessible via the self.params field, and outside of a block one can access all parameters of the network via the collect_params() method called on a container. ParameterDict uses the Parameter class to represent parameters inside of an Apache MXNet neural network. If a parameter doesn't exist, trying to get it via self.params will create it automatically.
```python
class NormalizationHybridLayer(gluon.HybridBlock):
    def __init__(self, hidden_units, scales):
        super(NormalizationHybridLayer, self).__init__()

        with self.name_scope():
            self.weights = self.params.get('weights',
                                           shape=(hidden_units, 0),
                                           allow_deferred_init=True)

            self.scales = self.params.get('scales',
                                          shape=scales.shape,
                                          init=mx.init.Constant(scales.asnumpy()),
                                          differentiable=False)

    def hybrid_forward(self, F, x, weights, scales):
        normalized_data = F.broadcast_div(F.broadcast_sub(x, F.min(x)),
                                          (F.max(x) - F.min(x)))
        weighted_data = F.FullyConnected(normalized_data, weights,
                                         num_hidden=self.weights.shape[0],
                                         no_bias=True)
        scaled_data = F.broadcast_mul(scales, weighted_data)
        return scaled_data
```

In the example above, two sets of parameters are defined:

1. The parameter weights is trainable. Its shape is unknown during the construction phase and will be inferred on the first run of forward propagation.
2. The parameter scales is a constant that doesn't change. Its shape is defined during construction.

Notice a few aspects of this code:

• The name_scope() method is used to add a prefix to parameter names during saving and loading.
• The full shape of weights is not provided when creating it; instead, it is going to be inferred from the shape of the input.
• The scales parameter is initialized and marked as differentiable=False.
• The F backend is used for all calculations.
• The dot product is calculated using the F.FullyConnected() method instead of the F.dot() method. The former was chosen over the latter because it supports automatic inference of input shapes, while F.dot() doesn't. This is extremely important to know if one doesn't want to hard-code all the shapes. The best way to learn which operators support automatic inference of input shapes at the moment is to browse the C++ implementation of operators and see whether one uses a method such as SHAPE_ASSIGN_CHECK(*in_shape, fullc::kWeight, Shape2(param.num_hidden, num_input));
• The hybrid_forward() method signature has changed: it accepts two new arguments, weights and scales.

The last peculiarity is due to the support of imperative and symbolic programming by HybridBlock.
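To make the shapes concrete, here is a plain-NumPy sketch of the three steps the layer's hybrid_forward performs (normalize, apply a fully connected layer with no bias, scale). The sizes and random values are illustrative assumptions, not taken from the tutorial:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-10, 10, size=(5, 2))   # batch of 5 examples, 2 features each
weights = rng.standard_normal((5, 2))   # (num_hidden, num_input): the layout FullyConnected expects
scales = np.array([2.0])

normalized = (x - x.min()) / (x.max() - x.min())  # broadcast_sub / broadcast_div
weighted = normalized @ weights.T                 # FullyConnected with no_bias=True
scaled = scales * weighted                        # broadcast_mul
print(scaled.shape)  # (5, 5): one hidden unit per column
```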
During the training phase, parameters are passed to the layer by the Apache MXNet framework as additional arguments to the method, because they might need to be converted to a Symbol, depending on whether the layer was hybridized. One shouldn't use self.weights and self.scales or self.params.get in hybrid_forward, except to get the shapes of parameters.

Running a forward pass on this network is very similar to the previous example, so instead of doing just one forward pass, let's run a training step to show that the scales parameter doesn't change during the training while the weights parameter does.

```python
def print_params(title, net):
    """Helper function to print out the state of parameters of NormalizationHybridLayer"""
    print(title)
    hybridlayer_params = {k: v for k, v in net.collect_params().items()
                          if 'normalizationhybridlayer' in k}

    for key, value in hybridlayer_params.items():
        print('{} = {}\n'.format(key, value.data()))


net = gluon.nn.HybridSequential()                      # Define a Neural Network as a sequence of hybrid blocks
with net.name_scope():
    net.add(Dense(5))                                  # Add Dense layer with 5 neurons
    net.add(NormalizationHybridLayer(hidden_units=5,
                                     scales=nd.array([2])))  # Add our custom layer
    net.add(Dense(1))                                  # Add Dense layer with 1 neuron

net.initialize(mx.init.Xavier(magnitude=2.24))         # Initialize parameters of all layers
net.hybridize()                                        # Create, optimize and cache computational graph

input = nd.random_uniform(low=-10, high=10, shape=(5, 2))  # Create 5 random examples with 2 features each in range [-10, 10]
label = nd.random_uniform(low=-1, high=1, shape=(5, 1))

mse_loss = gluon.loss.L2Loss()                         # Mean squared error between output and label
trainer = gluon.Trainer(net.collect_params(),          # Init trainer with Stochastic Gradient Descent (sgd) optimization method and parameters for it
                        'sgd',
                        {'learning_rate': 0.1, 'momentum': 0.9})

with autograd.record():                                # autograd records computations done on NDArrays inside "with" block
    output = net(input)                                # Run forward propagation
    print_params("=========== Parameters after forward pass ===========\n", net)
    loss = mse_loss(output, label)                     # Calculate MSE

loss.backward()                                        # Backward computes gradients and stores them as a separate array within each NDArray in .grad field
trainer.step(input.shape[0])                           # Trainer updates parameters of every block, using the .grad field and the optimization method (sgd in this example);
                                                       # we provide the batch size that is used as a divider in the cost function formula
print_params("=========== Parameters after backward pass ===========\n", net)
```

```
=========== Parameters after forward pass ===========

hybridsequential94_normalizationhybridlayer0_weights =
[[-0.3983642  -0.505708   -0.02425683 -0.3133553  -0.35161012]
 [ 0.6467543   0.3918715  -0.6154656  -0.20702496 -0.4243446 ]
 [ 0.6077331   0.03922009  0.13425875  0.5729856  -0.14446527]
 [-0.3572498   0.18545026 -0.09098256  0.5106366  -0.35151464]
 [-0.39846328  0.22245121  0.13075739  0.33387476 -0.10088372]]
<NDArray 5x5 @cpu(0)>

hybridsequential94_normalizationhybridlayer0_scales =
[2.]
<NDArray 1 @cpu(0)>

=========== Parameters after backward pass ===========

hybridsequential94_normalizationhybridlayer0_weights =
[[-0.29839832 -0.47213346  0.08348035 -0.2324698  -0.27368504]
 [ 0.76268613  0.43080837 -0.49052125 -0.11322092 -0.3339738 ]
 [ 0.48665082 -0.00144657  0.00376363  0.47501418 -0.23885089]
 [-0.22626656  0.22944227  0.05018325  0.6166192  -0.24941102]
 [-0.44946212  0.20532274  0.07579394  0.29261002 -0.14063817]]
<NDArray 5x5 @cpu(0)>

hybridsequential94_normalizationhybridlayer0_scales =
[2.]
<NDArray 1 @cpu(0)>
```

As can be seen from the output above, the weights parameter has been changed by the training while scales has not.

## Conclusion

One important quality of a deep learning framework is extensibility. Empowered by flexible abstractions like Block and HybridBlock, one can easily extend Apache MXNet functionality to meet one's needs.
https://leanprover-community.github.io/archive/stream/113488-general/topic/FTC.20and.20Integration.20.2F.20Differentiation.html
## Stream: general

### Topic: FTC and Integration / Differentiation

#### James Arthur (Jun 30 2020 at 10:43):

Hi All, I've decided to formalise some generalised trigonometric functions and more specifically the lower bound of a generalised sinc(x) (sin x / x) function. However, my function is defined as an integral. $\int_0^x\frac{dt}{(1 - t^p)^\frac{1}{p}}$ I have been told there is a statement of the FTC but not a proof of it. Where is it, how is it used, and are there any examples of it?

#### Chris Hughes (Jun 30 2020 at 12:19):

You can get nice formatting if you use double dollar signs, $\int_0^x\frac{dt}{(1 - t^p)^\frac{1}{p}}$.

#### Chris Hughes (Jun 30 2020 at 12:25):

There's a statement of FTC here #1850, but I don't know too much about this part of the library. @Yury G. Kudryashov will know better than me.

#### Yury G. Kudryashov (Jun 30 2020 at 14:29):

I'm in the middle of refactoring integrals in mathlib.

#### Yury G. Kudryashov (Jun 30 2020 at 14:31):

Currently we have no definition of $\int_a^bf(x)\,dx$. We have a definition of $\int f(x)\,d\mu(x)$, and most theorems assume that $\mu$ is the canonical measure. We also have a definition of $\int_{x\in A}f(x)\,d\mu(x)$.

#### James Arthur (Jun 30 2020 at 14:45):

I presume that creating an integral like $\int_a^b f(x)\,dx$ is too hard for me to try and create? But if I was to go about it, would it just be Riemann sums? The other definitions look cryptic and maybe not of use to me.

#### Chris Hughes (Jun 30 2020 at 14:47):

Probably what will happen is that there will be a cryptic definition, but also a proof that it's the same as a Riemann integral whenever a Riemann integral exists.

#### Yury G. Kudryashov (Jun 30 2020 at 16:33):

The plan is to use the Lebesgue integral whenever possible.

#### James Arthur (Jun 30 2020 at 16:46):

Amazing, that makes sense given the definitions you have already given. You can transfer between an $\int{f(x)d\mu}$ form and a Lebesgue integral relatively easily, can't you?

#### Yury G.
Kudryashov (Jul 01 2020 at 03:15):

What exactly do you mean by $\int f(x)\,d\mu$?

#### James Arthur (Jul 01 2020 at 07:56):

I was told it's a measure theory integral. I was told something along the lines of: $\int{f(x)d\mu} = \int_0^\infty{g(x)dx}$ where $f$ and $g$ are related in some way.

#### Yury G. Kudryashov (Jul 01 2020 at 07:57):

"Lebesgue integral" = "measure theory integral".

#### Yury G. Kudryashov (Jul 01 2020 at 07:59):

Basically you have some density along the line, then you compute $\int_{-\infty}^{\infty}f(x)\rho(x)\,dx$, but the definition works for a measure with no density as well (e.g., if $\mu$ is the Dirac measure at zero, then $\int f(x)\,d\mu(x)=f(0)$).

#### James Arthur (Jul 01 2020 at 08:00):

Oh sorry. I just thought it was an integral such that you take the rectangles horizontal instead of vertical. It makes sense why my lecturers moved my interest away from it so quickly.

#### Yury G. Kudryashov (Jul 01 2020 at 08:01):

"Rectangles horizontal instead of vertical" is part of the formal definition.

#### James Arthur (Jul 01 2020 at 08:02):

I will have to look up the formal definition, as that sounds amazing. That's not implemented at the moment though; it's something that will be implemented in the future?

#### Yury G. Kudryashov (Jul 01 2020 at 08:06):

This is implemented as a general definition (i.e., for any measure space) but it lacks an interface making it useful for undergraduate calculus.

#### James Arthur (Jul 01 2020 at 08:14):

Thank you, I'll just have to wait for the integrals to be useful. I doubt I could help formalise any of it.
https://physics.stackexchange.com/questions/364580/what-is-the-role-of-determinant-and-trace-of-matrices-in-physics
# What is the role of determinant and trace of matrices in physics? [closed]

There is a vast area of physics where we have to use matrices. They serve not only to do mathematical problems in physics but also to produce a physical realization of an operation. I think matrices carry a huge amount of physics in symmetry operations. Again, a matrix can be described by two numbers: one is the determinant and the other is the trace. My question is: what are the physical significances of the DETERMINANT as well as the TRACE?

• Well, this differs on a case-by-case basis. Matrices are indeed representations of operators (that are widely used in, say, Quantum Mechanics) but still this will be quite a narrow description of the why. – gented Oct 23 '17 at 15:23
• A matrix certainly cannot be described only by its determinant and its trace. For instance $$P_{12}=\left(\begin{array}{ccc} 0&1&0\\ 1&0&0\\ 0&0&1\end{array}\right)\, \qquad P_{13}= \left(\begin{array}{ccc} 0&0&1\\ 0&1&0\\ 1&0&0\end{array}\right)$$ have the same determinant and trace and are obviously distinct. – ZeroTheHero Oct 23 '17 at 15:24
• My question is not about the use of matrices in Quantum Mechanics or somewhere. – DIPANJAN HAZRA Oct 23 '17 at 15:33
• What is your question then? Your title is "Why the use of matrices is so important in physics". Moreover, you claim "Again a matrix can be described by two numbers one is determinant and another one is trace" which is incorrect. – ZeroTheHero Oct 23 '17 at 15:36
• The trace and the determinant are but two of a number of quantities invariant under conjugation of a matrix by a unitary transformation, i.e. under a change of basis. This is hardly enough to completely specify a matrix. For instance in $3\times 3$ there is another invariant (see math.stackexchange.com/a/807183/160660). – ZeroTheHero Oct 23 '17 at 19:28

Well, there is not much to tell.
The most physical interpretation of the determinant is the following: the determinant represents the "volume distortion"; it tells you by how much your linear transformation will change the volume of a parallelogram. For instance, the matrix $2\mathbb{1}_{2\times2}$ tells you that the square of area 1 will be stretched to a square of area 4 (because $\mathrm{det}(2\mathbb{1}_{2\times2})=4$). More generally, this is true for any dimension, and also for every kind of linear transformation. Think of the Jacobian when performing a coordinate transformation in an integral: the measure has to change, since the coordinate transformation may change the volume of an infinitesimal n-dimensional parallelogram. That is also why we use SO(n) or SU(n) as symmetry groups in quantum mechanics; their determinant is 1, so that the volume is "conserved" when we rotate things.

One can also think of $2\times 2$ matrices as describing a system of two linear equations; such systems are extremely common in physics.

Here is a useful relation between determinant and trace. The linear Lie group transformations done in physics are of the form $M=e^\Theta$, where both $M$ and $\Theta$ are matrices. Since $\det(e^\Theta) = e^{\operatorname{tr}(\Theta)}$, it follows that $$\det(M)=1 \quad \text{if and only if} \quad \operatorname{tr}(\Theta)=0.$$ This is how the $\det=1$ condition, signified by the "S" in SO(n), SU(n), and SL(n), turns into a constraint on the coordinates and generators of the group.
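The identity $\det(e^\Theta) = e^{\operatorname{tr}(\Theta)}$ behind that equivalence is easy to check numerically. Here is a small NumPy sketch (the matrix entries are arbitrary, chosen only so that the generator is traceless); the matrix exponential is implemented directly via its Taylor series to keep the example self-contained:

```python
import numpy as np

def expm(A, terms=40):
    # Matrix exponential via a truncated Taylor series (adequate for small matrices)
    out = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

theta = np.array([[0.3, 1.2],
                  [0.7, -0.3]])  # traceless generator: tr(theta) = 0
M = expm(theta)
# det(e^theta) = e^{tr(theta)} = e^0 = 1
print(np.linalg.det(M))
```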
https://www.coursehero.com/file/34714120/sol-7-2pdf/
# Exercise 7.1.13

Section 7.1, Exercise 7.1.13: Let $T$ be a linear operator on a finite-dimensional vector space $V$ such that the characteristic polynomial of $T$ splits, and let $\lambda_1, \lambda_2, \ldots, \lambda_k$ be the distinct eigenvalues of $T$. For each $i$, let $J_i$ be the Jordan canonical form of the restriction of $T$ to $K_{\lambda_i}$. Prove that $J = J_1 \oplus J_2 \oplus \cdots \oplus J_k$ is the Jordan canonical form of $T$.

Remark: The statement of the exercise in the book directs you to prove that "$J$ is the Jordan canonical form of $J$." Although that statement actually makes sense (it amounts to saying, "Prove that the matrix $J$ is in Jordan canonical form"), I'm guessing the book intended to state "... the Jordan canonical form of $T$." The question is also somewhat confusing because it refers to "the Jordan canonical form," while the uniqueness of the Jordan canonical form up to ordering of Jordan blocks is not established until the next section of the text.

Proof. We first observe that the matrix $J$ is in Jordan canonical form (cf. the remark above). Indeed, by the definition of the direct sum of square matrices on page 320, $J$ is "block diagonal" with diagonal blocks $J_i$. By definition, each $J_i$ is in Jordan canonical form; thus, each $J_i$ is itself block diagonal with Jordan diagonal blocks. So, by "refining" the block-diagonal decomposition of $J$, we see that $J$ is a block-diagonal matrix with Jordan diagonal blocks. (Alternatively (and equivalently), note that each $J_i$, being in Jordan canonical form, is a direct sum of Jordan blocks in the sense defined on page 320. Thus $J$ is a direct sum of Jordan blocks and is thus in Jordan canonical form. Strictly speaking, one ought to observe that the direct sum of square matrices is an associative operation, so that $(A \oplus B) \oplus C = A \oplus B \oplus C$.)

It remains to show that $J$ is a matrix representation of $T$, i.e., $J = [T]_\beta$ for some ordered basis $\beta$ of $V$. For each $1 \le i \le k$, let $\beta_i$ be a Jordan canonical basis of $K_{\lambda_i}$ with respect to which $J_i$ is the matrix representation of the restriction of $T$ to $K_{\lambda_i}$. We claim $\beta = \beta_1 \cup \cdots \cup \beta_k$ is the desired basis of $V$:

We first observe that $\beta$ is in fact a basis of $V$. By Theorem 7.8 (which was proven in lecture), we have $$V = \bigoplus_{i=1}^{k} K_{\lambda_i}.$$ Theorem 5.10(d) then shows that $\beta$ is a basis of $V$. That $J = [T]_\beta$ then follows from Theorem 5.25. (Writing out a proof of Theorem 5.25 in terms of the "entry-by-entry" definition of page 320 is somewhat instructive but quite tedious. At the least, you should be able to explain to someone in words why that theorem is true.)
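The direct sum of square matrices used throughout the proof is just block-diagonal stacking. A minimal NumPy illustration (the Jordan blocks here are arbitrary examples, not taken from the exercise):

```python
import numpy as np

def direct_sum(*blocks):
    # J1 (+) J2 (+) ... : place each square block on the diagonal, zeros elsewhere
    n = sum(b.shape[0] for b in blocks)
    out = np.zeros((n, n))
    i = 0
    for b in blocks:
        k = b.shape[0]
        out[i:i + k, i:i + k] = b
        i += k
    return out

J1 = np.array([[2.0, 1.0],
               [0.0, 2.0]])   # a 2x2 Jordan block for eigenvalue 2
J2 = np.array([[3.0]])        # a 1x1 Jordan block for eigenvalue 3
J = direct_sum(J1, J2)        # 3x3 block-diagonal matrix, itself in Jordan form
print(J)
```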
http://mathoverflow.net/questions/60075/connectivity-of-the-erdosrenyi-random-graph/60091
# Connectivity of the Erdős–Rényi random graph It is well-known that if $\omega=\omega(n)$ is any function such that $\omega \to \infty$ as $n \to \infty$, and if $p \ge (\log{n}+\omega) / n$ then the Erdős–Rényi random graph $G(n,p)$ is asymptotically almost surely connected. The way I know how to prove this is (1) first counting the expected number of components of order $2, 3, \dots, \lfloor n/2 \rfloor$, and seeing that the expected number is tending to zero. Then (2) showing the expected number of isolated vertices is also tending to zero. This approach also allows more precise results, such as: if $p = (\log{n}+c) / n$ with $c \in \mathbb{R}$ constant, then Pr$[G(n,p)$ is connected] $\to e^{-e^{-c}}$ as $n \to \infty$, which follows once we know that in this regime the number of isolated vertices is approaching a Poisson distribution with mean $e^{-c}$. I am wondering if it is possible to give an easier proof (of a coarser result) along the following lines. There are $n^{n-2}$ spanning trees on the complete graph, and $G$ is connected if and only if one of these trees appears. So the expected number of spanning trees is $n^{n-2}p^{n-1}$. One might expect that if this function is growing quickly enough, then with high probability $G(n,p)$ is connected. I think I remember reading somewhere that this approach doesn't quite work --- for example the variance is too large to apply Chebyshev’s inequality. What I am wondering is if there is some way to fix this if we are willing to make $p$ a little bit bigger. In particular, what about $p = C \log{n} / n$ for some large enough constant $C > 1$, or even $p = n^{-1 + \epsilon}$ for fixed but arbitrarily small $\epsilon >0$? - A nice question. Here's a strategy that occurs to me, though it could fail miserably. 
The basic problem seems to be what you said about variance: the appearances of different spanning trees are far from independent, since it is possible to make local modifications to a spanning tree and get another one. (For example, if x is a leaf joined to y, which is joined only to z, then we can replace the path zyx by the path zxy.) One way we might try to defeat this is to choose a random set $\Sigma$ of spanning trees, where each spanning tree is chosen independently with probability $\alpha^{n-1}$ for some carefully chosen $\alpha$ (which I imagine as a small negative power of $n$). Then the expected number of trees from $\Sigma$ in a $p$-random graph is $(\alpha p)^{n-1}n^{n-2}$, which is pretty large even when $p$ is pretty close to $n^{-1}$. But now we might expect that any two trees in $\Sigma$ are quite well-separated, so perhaps it is possible to get a decent estimate for the variance. Actually, it's not clear to me what passing to the random set really achieves here: maybe a simpler method (but not wholly simple) is to work out the expected number of pairs of spanning trees by carefully classifying what they can look like. The hope would be that if you pick one tree at random, then the proportion of trees that overlap with it to any great extent is usually so small that the expected number of pairs is not significantly bigger than the square of the expected number of spanning trees. With $p=n^{-1+\epsilon}$ something like this might work, but you've probably already thought about this. - The expected number of spanning trees becomes large when $p > 1/n$, whereas the expected number of Hamilton paths becomes large when $p > e/n$. Controlling the possible kinds of overlap between two Hamilton paths is much easier than for general trees, so if the method you describe in the last paragraph is going to work it would probably be much easier to implement after restricting attention to Hamilton paths. 
The best bound one could then hope for would be above the threshold for Hamiltonicity, but this is only a little bigger than the threshold for connectivity anyway. –  Louigi Addario-Berry Mar 31 '11 at 8:26 If so, then with $p=\frac{3}{2n}$ the expected number of edges is only $\frac{3(n-1)}{4}$, which is nowhere near enough for even one spanning tree, but the expected number of spanning trees is still large because of a sort of St. Petersburg paradox. –  Aaron Meyerowitz Mar 31 '11 at 16:24 Thanks for this answer, and also for the comment Louigi. Thinking about paths was fruitful, and it looks like one can make the following approach work: Set $p \gg \log^2{n}/n$, and consider the number of paths of length $\approx \log{n}$ between a pair of vertices $x$ and $y$. One can get that the expected number of such paths is large, but since the paths are short they don't intersect too often, and then Janson's inequality (for example) ensures that every pair of vertices is connected by such a path with high probability. –  Matthew Kahle Apr 20 '11 at 0:48 Here is an easy proof for $C>2$, and a fairly easy proof for $C>1$. Define a cut of a graph $G$ to be a partition of the vertices of $G$ into two sets which are crossed by no edges. So a graph has a nontrivial cut if and only if it is disconnected. We will show that the expected number of cuts of $G$ goes to $0$, so that, with probability tending to $1$, our graph is connected. For any particular partition of the vertices of $G$ into two sets, of size $k$ and $n-k$, the probability that this partition is a cut is $(1-p)^{k(n-k)}$, and there are $\binom{n}{k}$ such partitions. So the expected number of cuts is $$\sum_{k=1}^{n/2} (1-p)^{k(n-k)} \binom{n}{k}.$$ We only have to go up to half way, because we can always take $k$ to be the smaller half of the cut. (I'm going to be sloppy and write non-integer bounds for my summations, as I've done here. You can fix it, if you like.)
For $C>2$, we have the following crude bound $$\sum_{k=1}^{n/2} (1-p)^{k(n-k)} \binom{n}{k} \leq \sum_{k=1}^{n/2} e^{-p k(n-k)} n^k.$$ The $\log$ of the summand is $$-k (n-k) \frac{C \log n}{n} + k \log n = k \log n \left( 1-C(1-k/n) \right) \leq k \log n (1-C/2)$$ So, if $C>2$, we are bounded by $\sum_{k=1}^{n/2} e^{(1-C/2) k \log n}$. This is a geometric series, whose sum is easily seen to be bounded by a constant multiple of its leading term; namely $n^{(1-C/2)}$. So the sum goes to $0$ and we are done. Now, what if $C>1$, but not as large as $2$? Let $a$ be a real number such that $1-C(1-a) < 0$. The preceding argument shows that the contribution of the terms with $k<an$ is negligible. (If $C\leq 1$, there is no such number and this proof breaks.) We now consider the remaining terms, and use a different crude bound: $$\sum_{k=an}^{n/2} (1-p)^{k(n-k)} \binom{n}{k} \leq \sum_{k=an}^{n/2} e^{-pk(n-k)} 2^n$$ Again, the log of the summand is $$-k(n-k) \frac{C \log n}{n} + n \log 2 = -(k/n) (1-k/n) C n \log n + n \log 2 \leq - a (1-a) C n \log n + n \log 2$$ This is the log of an individual summand; we have to add up $(1/2 -a) n < n$ of them. So the sum is bounded by $$n e^{-a(1-a) C n \log n + n \log 2} \leq e^{-a(1-a) C n \log n + n (\log 2+1)}.$$ The $n \log n$ overwhelms the $n$, so we are done. - David, thanks for the reply. However the proof you gave is exactly the proof I describe above --- counting nontrivial cuts is equivalent to counting small connected components. (And this proof can be refined to give the more precise result I described.) What I am looking for is a proof that instead directly makes use of the fact that the expected number of spanning trees is tending to infinity very quickly. –  Matthew Kahle Mar 31 '11 at 17:37 Can you estimate how fast the number of spanning trees tends to infinity if $p=\frac{\log n}{n}$? I'd expect "really fast" although the chance of being connected is only about $1/3$. You would have to beat that rate.
–  Aaron Meyerowitz Apr 1 '11 at 20:46 Dear Matthew, this is not really an answer to your question but just a related matter. As you point out, the expected number of spanning trees in a random graph is about 1 already when $p=c/n$ and is very large when $p=\log n/n$, so the hope is that this can be used to show that with large probability the random graph contains a spanning tree. There is a collection of conjectures by Jeff Kahn and me trying to suggest a very general connection of this type. These conjectures are presented in the paper Thresholds and expectation thresholds by Kahn and me and are mentioned in this MO question. If true, these conjectures would imply that the threshold for connectivity is below $\log n/n$ (of course, we do not need it for this case...), but the proof would probably be, at best, much more complicated than the existing proofs. I should mention that the sharp threshold property, which was proved by Erdős and Rényi for connectivity, can be proved (with harder proofs) from more general principles: one is the Margulis–Talagrand theorem, which applies to the threshold for random subgraphs of highly edge-connected graphs, and one is Friedgut's result, which identifies graph properties with coarse thresholds. - With $p=\frac{c}{n}$ the expected number of spanning trees may indeed grow exponentially (although perhaps not as fast as $\frac{c^{n-1}}{2n}$) while the probability of being connected goes exponentially to $0$. Maybe with $p=\frac{\log n-\log \log n}{n}$ the chance of being connected decreases like $e^{-n}$ but the number of spanning trees of the single large component grows something like $(\log n)^{n}$. Then the rare occasions that the giant component is the whole graph (i.e. the graph is connected) would still make the expected number of spanning trees grow something like $(\frac{\log n}{e})^n$, which is superexponential.
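The quantities in this thread are easy to check numerically. The following Python sketch (my addition, not from the thread) estimates Pr[$G(n,p)$ is connected] at $p=(\log n + c)/n$ with a union-find, compares the frequency with the limit $e^{-e^{-c}}$, and evaluates the log of the expected number of spanning trees $n^{n-2}p^{n-1}$, which already exceeds $e^{500}$ at this density; the parameters are arbitrary choices for illustration.

```python
import math
import random

def is_connected_gnp(n, p, rng):
    """Sample G(n, p) edge by edge and test connectivity with union-find."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    components = n
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                ri, rj = find(i), find(j)
                if ri != rj:
                    parent[ri] = rj
                    components -= 1
    return components == 1

n, c, trials = 300, 1.0, 100
p = (math.log(n) + c) / n
rng = random.Random(2011)
freq = sum(is_connected_gnp(n, p, rng) for _ in range(trials)) / trials

print("empirical Pr[connected]:", freq)
print("limit exp(-exp(-c))    :", math.exp(-math.exp(-c)))

# Log of the expected number of spanning trees, n^(n-2) * p^(n-1):
log_trees = (n - 2) * math.log(n) + (n - 1) * math.log(p)
print("log E[#spanning trees] :", log_trees)
```

Even at moderate $n$ the empirical frequency sits near the limit $e^{-e^{-1}} \approx 0.69$, while the expected number of spanning trees is astronomically large, illustrating the thread's point that a huge expectation does not by itself force connectivity.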
2014-04-17 07:26:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8995546102523804, "perplexity": 127.09024383323165}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00178-ip-10-147-4-33.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/146074/logic-translation-involving-the-existential-quantifier-and-such-that?answertab=votes
# Logic translation involving the existential quantifier and “such that” A: "There exists an integer greater than 5 such that it is less than 10" B: "There exists an integer such that it is greater than 5 and less than 10." C: "There exists an integer less than 10 such that it is greater than 5." D: "There exists an integer such that it is less than 10 and greater than 5." I know that A can be translated to B (likewise C to D). B and D are obviously equivalent, whereas I don't think A and C are. Thus, I believe that A implies B but not vice-versa (likewise C implies D but not vice-versa). But is this correct?? - Why do you think any of them are logically unequivalent? – anon May 17 '12 at 1:03 @anon Since the part after "such that" is the clause, I don't think "...X such that Y" is equivalent to "...Y such that X". But am I confusing the linguistics of the statement with its logical interpretation? – Ryan May 17 '12 at 1:29 First off, all four statements are clearly true and implication is truth-functional... but let's ignore this. What do you think X such that Y means if $X$ and $Y$ are generic propositions? For example, "It is two o'clock such that apples are red." That's not the real form of these statements. Rather, let $\Bbb Z$ denote the integers, $P(x)$ the claim "$x>5$" and $Q(x)$ the claim "$x<10$," and finally the sets $A=\{x:P(x)\}$ and $B=\{x:Q(x)\}$. Now try to rephrase the four items with set inclusions, the two propositional functions and the existential quantifier and see how malleable they are. – anon May 17 '12 at 1:29 They're the same. "Such that" would be the colon in my interpretation of A: $$\exists n>8 : (n<15)$$ However, in this case "n>8" is itself a statement, so what we're really saying is that there exists some n that satisfies both the conditions: $$\exists n : (n>8) \wedge (n<15)$$ Because "and" is commutative, and "such that" simplifies to "and" in this case, all of the statements are equivalent. 
Before you ask, the "for all" quantifier is different. If you say the following: $$\forall n>8 : n<15$$ what you're really saying is that $n$ being greater than 8 implies that $n$ is less than 15, or: $$\forall n :(n>8) \rightarrow (n<15)$$ This is the reason why the statements "for all x such that x>0, x<10" and "for all x such that x<10, x>0" are not the same. However, the statement "there exists an x greater than zero such that x is less than ten" is equivalent to "there exists an x less than ten such that x is greater than zero". There are lots of linguistic issues like this. "Such that" and "but" are the ones I've had the most trouble with. - Yes, I'm aware of the if-then translation for the "for all" quantifier and that "but" is, amusingly enough, translated as "and". But the reason why I've been struggling with "such that" is that the set {2,3,4}={x: x=2,3,4} cannot be written as {x=2,3,4: x}. So my mind was fixated on the notion that you cannot switch the parts preceding and succeeding the colon/"such that"! Thanks for clarifying that I'm indeed confusing the linguistics of the "such that" statements with their logical interpretation! – Ryan May 17 '12 at 1:41
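The answer's point can be brute-force checked over a finite domain. In this Python sketch (my addition, with an arbitrarily chosen domain), "such that" under the existential quantifier becomes a conjunction and commutes, while under the universal quantifier it becomes a domain restriction (an implication) and does not:

```python
domain = range(-5, 10)

# Existential: "such that" is a conjunction, so the order is irrelevant.
e1 = any(x > 5 and x < 10 for x in domain)
e2 = any(x < 10 and x > 5 for x in domain)
print(e1, e2)  # True True

# Universal: "such that" restricts the domain, i.e. an implication,
# so swapping the restriction and the claim changes the meaning.
u1 = all(x < 10 for x in domain if x > 0)  # for all x > 0: x < 10
u2 = all(x > 0 for x in domain if x < 10)  # for all x < 10: x > 0  (fails at x = -5)
print(u1, u2)  # True False
```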
2016-02-07 01:39:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8364711999893188, "perplexity": 547.5136885149062}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701148428.26/warc/CC-MAIN-20160205193908-00211-ip-10-236-182-209.ec2.internal.warc.gz"}
https://lists.cam.ac.uk/pipermail/cl-isabelle-users/2014-August/msg00097.html
# [isabelle] Getting cong to work with carriers

Working with abstract algebra (with locales) requires always showing elements are in the carrier before concluding anything with them. Using type classes, this would be automatic from type-checking. I feel like there should be a way to make it as automatic with carriers by declaring the right simp rules, but so far I've had no success, and I have to continually pipe in facts about elements being in the carrier every time I want to prove something. Does anyone know a way to get around this? I can get it to do basic things like: if a, b are in the carrier then a*b+c is in the carrier. But I am working with finite sums, where congruence rules and not just simp rules are required. I've set up a baby example to see where the problem is. Suppose I have a simp rule and congruence rules (think of F being like sum or finsum, for instance):

lemma simp_rule: fixes x assumes "x∈S" shows "a x = b x" sorry

lemma rule_cong': "[|A=B; !! x. x∈A ==> f x = g x|] ==> F A f = F B g" sorry (*cong*)

lemma rule_cong: "[|A=B; !!x. x∈A =simp=> f x = g x|] ==> F A f = F B g" sorry

Then Isabelle successfully solves the following, as expected (by simplifying the LHS to F S b, then the RHS also to F S b):

lemma test_cong': "F S (λx. if (x∈S) then b x else 0) = F S a" by (auto cong: rule_cong simp add: simp_rule)

But Isabelle stumbles on the following:

lemma test_congT': "F T (λx. if (x∈S) then b x else 0) = F T a" proof - have 1:"T⊆S" by (unfold T_def S_def, auto) from 1 show ?thesis apply (auto cong: rule_cong simp add: simp_rule elim!: subsetD) (*fails - also with cong/cong' and elim!/intro*) (*subsetD: ?A ⊆ ?B ==> ?c ∈ ?A ==> ?c ∈ ?B*)

The problem is that, when applying the congruence rule, it fails to show x∈S. With the assumption T⊆S and x∈T, and with subsetD declared as an elim rule explicitly, it should conclude x∈S. Why doesn't it, and how can I get it to do this?
The application is that F is finsum/finprod and the expression inside simplifies on the condition that x ∈ carrier G, while the set T is a subset of carrier G. For easy experimentation, here is the file: https://dl.dropboxusercontent.com/u/27883775/work/Isabelle/TestTactics.thy -Holden
2021-12-03 00:22:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7229475378990173, "perplexity": 5604.299507163966}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362571.17/warc/CC-MAIN-20211203000401-20211203030401-00254.warc.gz"}
https://nbviewer.org/github/Dyalog/dyalog-jupyter-notebooks/blob/master/Boolean%20Scans%20and%20Reductions.ipynb
# Boolean Scans and Reductions¶ The content of this notebook was presented in Dyalog Webinars: Boolean Scans and Reductions. Logical values are the integers 0 and 1, in contrast to the distinct true and false values you might find in other programming languages. ## Utility¶ Boolean data is widely used in APL. Results of comparison functions: In [1]: _ ← ,⍥⊆ ⍝ Stranding function ¯2 ¯1 0 1 2 (↑ < _ ≤ _ = _ ≠ _ ≥ _ >) 0 Out[1]: 1 1 0 0 0 1 1 1 0 0 0 0 1 0 0 1 1 0 1 1 0 0 1 1 1 0 0 0 1 1 Data-driven conditionals. For example, "increase the salaries in group A by 5%": In [2]: groups ← 'ABACC' salaries ← 20000 25750 21000 32350 32400 salaries × 1.05 * groups = 'A' Out[2]: 21000 25750 22050 32350 32400 Selecting from arrays: In [3]: values ← 4 10 6 8 16 24 names←'Anne' 'Ben' 'Charlie' 'Demi' 'Ella' 'Fiona' (values≥10)/names Out[3]: ┌───┬────┬─────┐ │Ben│Ella│Fiona│ └───┴────┴─────┘ Simple statistics: In [4]: (+⌿÷≢)'the alphabet'∊'aeiou' Out[4]: 0.333333 ## Sixteen logical functions¶ With modern APL, all sixteen logical functions for 2-bit inputs can be represented by either primitives, atops or constant functions. The descriptions given below are either common names for digital logic gates or mnemonic descriptions of the functionality with regard to the left and right Boolean arguments. The binary column lists the truth table f(0,0) f(0,1) f(1,0) f(1,1), so the decimal column is 2⊥ of it.

| Binary | Decimal (2⊥) | Function f | Description |
| --- | --- | --- | --- |
| 0 0 0 0 | 0 | 0⍨ | FALSE |
| 0 0 0 1 | 1 | ∧ | AND |
| 0 0 1 0 | 2 | > | Left but not right |
| 0 0 1 1 | 3 | ⊣ | Left |
| 0 1 0 0 | 4 | < | Right but not left |
| 0 1 0 1 | 5 | ⊢ | Right |
| 0 1 1 0 | 6 | ≠ | Exclusive OR |
| 0 1 1 1 | 7 | ∨ | OR |
| 1 0 0 0 | 8 | ⍱ | NOR |
| 1 0 0 1 | 9 | = | Exclusive NOR |
| 1 0 1 0 | 10 | ~⍤⊢ | Not right |
| 1 0 1 1 | 11 | ≥ | Left OR not right |
| 1 1 0 0 | 12 | ~⍤⊣ | Not left |
| 1 1 0 1 | 13 | ≤ | Right OR not left |
| 1 1 1 0 | 14 | ⍲ | NAND |
| 1 1 1 1 | 15 | 1⍨ | TRUE |

Note Phil Last's article gives scalar dfn definitions, whereas some definitions in the previous table are not scalar functions.
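The sixteen-function table can be reproduced mechanically in other languages. Here is a Python sketch (my addition, not part of the notebook): each function is defined by its 4-bit truth table, with the bit for inputs (0,0) most significant, so the decimal index matches the 2⊥ column above.

```python
def boolfn(index):
    """Return the two-argument Boolean function with the given 0..15 index."""
    # Bit 3 is f(0,0), bit 2 is f(0,1), bit 1 is f(1,0), bit 0 is f(1,1).
    return lambda l, r: (index >> (3 - (2 * l + r))) & 1

land, lor, xor, nand = boolfn(1), boolfn(7), boolfn(6), boolfn(14)
pairs = [(0, 0), (0, 1), (1, 0), (1, 1)]
print([land(l, r) for l, r in pairs])  # [0, 0, 0, 1]
print([lor(l, r) for l, r in pairs])   # [0, 1, 1, 1]
print([xor(l, r) for l, r in pairs])   # [0, 1, 1, 0]
print([nand(l, r) for l, r in pairs])  # [1, 1, 1, 0]
```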
## Scans and reductions¶ Reductions summarise some property: In [5]: +/4 3 1 5 ⍝ Sum Out[5]: 13 Scans indicate a progression of that property along the array: In [6]: +\4 3 1 5 ⍝ Cumulative sum Out[6]: 4 7 8 13 Some reductions, and their related scans, are well known to APLers: In [7]: ∧/1 1 1 1 1 1 1 1 ⍝ Are all true? ∧\1 1 0 1 1 1 0 1 ⍝ Were all true so far? ∨/0 0 1 0 0 0 0 0 ⍝ Are any true? ∨\0 0 1 0 0 0 1 0 ⍝ Were any true so far? Out[7]: 1 Out[7]: 1 1 0 0 0 0 0 0 Out[7]: 1 Out[7]: 0 0 1 1 1 1 1 1 ## The last value¶ The value of a vector reduction f/⍵ is always the last value in the related scan f\⍵. In [8]: +/3 4 1 5 ⊃⌽+\3 4 1 5 Out[8]: 13 Out[8]: 13 In general, properties of boolean vectors for which 1≡f/⍵ for a function f reflect properties in the related scans f\⍵. From here we will take a quick look at a couple of particular scans and reductions which have some interesting applications. At the end of this notebook is a table of reductions for the sixteen logical functions, as well as some references for further reading. 
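The same invariant (a reduction equals the last element of the corresponding scan) holds for folds and cumulative operations in most languages. A small Python illustration, my addition; note that Python's accumulate folds left-to-right, whereas an APL scan reduces each prefix right-to-left, so the two only coincide for associative functions such as + and ∧:

```python
from functools import reduce
from itertools import accumulate
from operator import add, and_

v = [3, 4, 1, 5]
print(reduce(add, v))            # 13, the sum
print(list(accumulate(v, add)))  # [3, 7, 8, 13], the cumulative sum
assert reduce(add, v) == list(accumulate(v, add))[-1]

bits = [1, 1, 0, 1, 1, 1, 0, 1]
print(list(accumulate(bits, and_)))  # "were all true so far?"
assert reduce(and_, bits) == list(accumulate(bits, and_))[-1]
```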
## Less-than¶ Less-than-scan leaves only the first true bit on, and all others are turned off: In [9]: <\0 0 1 0 1 1 1 1 1 1 0 1 0 1 1 Out[9]: 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 This is because the reduction only returns true when the last bit is the one and only true bit: In [10]: </0 0 0 1 Out[10]: 1 To see the relationship, view the scan as a progression of reductions: In [11]: ⍝ <\0 1 0 1 1 0 0<1 0<1<0 0<1<0<1 0<1<0<1<1 Out[11]: 0 Out[11]: 1 Out[11]: 0 Out[11]: 0 Out[11]: 0 An example use is to mark the start of end-of-line comments: In [12]: code ← '+/⍳10 ⍝ sum⍝of⍝first⍝10⍝ints' c ← code = '⍝' ↑ code c (<\c) Out[12]: + / ⍳ 1 0 ⍝ s u m ⍝ o f ⍝ f i r s t ⍝ 1 0 ⍝ i n t s 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 1 0 0 0 0 0 1 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 The pair-wise reduction can be used to mark the start of groups of similar cells: In [13]: v ← '⍟⎕⍟⍟⍟⎕⎕⍟⎕⎕⍟⍟⍟⍟⎕⎕⎕⎕⍟⍟⎕⎕⎕⎕⍟⍟⎕⍟' q ← '⎕'=v ↑v q (2</0,q) Out[13]: ⍟ ⎕ ⍟ ⍟ ⍟ ⎕ ⎕ ⍟ ⎕ ⎕ ⍟ ⍟ ⍟ ⍟ ⎕ ⎕ ⎕ ⎕ ⍟ ⍟ ⎕ ⎕ ⎕ ⎕ ⍟ ⍟ ⎕ ⍟ 0 1 0 0 0 1 1 0 1 1 0 0 0 0 1 1 1 1 0 0 1 1 1 1 0 0 1 0 0 1 0 0 0 1 0 0 1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 1 0 ## Not-equal¶ The not-equal reduction ≠/ only returns true if there is an odd number of 1s: In [14]: ≠/1 1 1 1 2|+/1 1 1 1 ≠/1 1 1 0 2|+/1 1 1 0 Out[14]: 0 Out[14]: 0 Out[14]: 1 Out[14]: 1 The related scan ≠\ then has the property that odd and even instances of 1s are joined by a series of 1s: In [15]: b ← 0 0 1 0 0 0 1 0 0 0 0 1 0 0 1 ↑ b (≠\b) Out[15]: 0 0 1 0 0 0 1 0 0 0 0 1 0 0 1 0 0 1 1 1 1 0 0 0 0 0 1 1 1 0 Again, looking at the progression can help us to see how this works: In [16]: ⍝ ≠\0 1 0 0 1 0 0≠1 0≠1≠0 0≠1≠0≠0 0≠1≠0≠0≠1 Out[16]: 0 Out[16]: 1 Out[16]: 1 Out[16]: 1 Out[16]: 0 An example use of this is to mark quoted parts of text: In [17]: quoted ← 'extract the "quoted" parts from this "text"' q ← '"'=quoted inc ← (~∧≠\)'"'=quoted ⍝ Excluding quote marks exc ← (⊢∨≠\)'"'=quoted ⍝ Including quote marks ↑ quoted q inc exc Out[17]: e x t r a c t t h e " q
u o t e d " p a r t s f r o m t h i s " t e x t " 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 These masks can easily be used to isolate the quoted parts: In [18]: inc/quoted exc⊆quoted Out[18]: quotedtext Out[18]: ┌────────┬──────┐ │"quoted"│"text"│ └────────┴──────┘ With a bit more effort, we can mark C-style (Java, JavaScript, CSS etc.) comments: In [19]: css ← '* {color: #000; /* text */ background: #fff; /* bg */}' cpos ← '/*'∘(≠\⍷∨¯1⌽∘⌽⍷∘⌽)css ↑ css cpos Out[19]: * { c o l o r : # 0 0 0 ; / * t e x t * / b a c k g r o u n d : # f f f ; / * b g * / } 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 0 Some of these examples, and more, can be found on APLcart. For one last example, let's use something like a data table with successive fields of different lengths: In [20]: data ← (?9⍴100),(4 2 3⌿3 3⍴3/'ABC'),⍪?9⍴0 In data, there is a field of length 4 containing the 'A' markers, followed by a field of length 2 with the 'B' markers and finally a field of length 3 with the 'C' markers. In [21]: ⎕←data Out[21]: 50 AAA 0.127932 26 AAA 0.528921 10 AAA 0.116772 50 AAA 0.320547 64 BBB 0.377289 90 BBB 0.54566 68 CCC 0.339044 98 CCC 0.145094 66 CCC 0.63628 We would like to present the data above in a more human-readable format, so we are going to insert two blank lines between each field. To start with, let's create some partition vectors. 
The function MTake creates a boolean vector where 1 indicates the start of a field: MTake has a neat, nested-array style definition: In [22]: MTake ← ∊↑∘1¨ MTake 4 2 3 Out[22]: 1 0 0 0 1 0 1 0 0 A more traditional definition has performance benefits for larger data: In [23]: MTake ← {¯1⌽(⍳+/⍵)∊+\⍵} MTake 4 2 3 Out[23]: 1 0 0 0 1 0 1 0 0 If f indicates a field, and b indicates a series of blank lines, 1 0 0 0 1 0 1 0 1 0 1 0 0 1 0 ←--f--→ ←b→ ←f→ ←b→ ←-f-→ ←b→ we can use catenate-rank-zero to interleave the blank lines: In [24]: MTake,4 2 3,⍤0⊢2 Out[24]: 1 0 0 0 1 0 1 0 1 0 1 0 0 1 0 And use not-equal-scan to switch sections on and off, creating an expansion vector: In [25]: ⎕←e←≠\MTake,4 2 3,⍤0⊢2 Out[25]: 1 1 1 1 0 0 1 1 0 0 1 1 1 0 0 Which we then use to expand a character matrix format of our data: In [26]: e⍀⍕data Out[26]: 50 AAA 0.127932 26 AAA 0.528921 10 AAA 0.116772 50 AAA 0.320547 64 BBB 0.377289 90 BBB 0.54566 68 CCC 0.339044 98 CCC 0.145094 66 CCC 0.63628 A similar technique can be used to also label our fields: In [27]: labels ← (MTake 4 2 3)⍀(↑'Field '∘,¨'ABC') e⍀⍕labels,data Out[27]: Field A 50 AAA 0.127932 26 AAA 0.528921 10 AAA 0.116772 50 AAA 0.320547 Field B 64 BBB 0.377289 90 BBB 0.54566 Field C 68 CCC 0.339044 98 CCC 0.145094 66 CCC 0.63628 ## Flat partition¶ The STSC publication Boolean Functions and Techniques contains a wealth of information on Boolean functions and techniques (duh). In particular, there is a description of a general procedure for applying some function to parts of an array independently, where parts is defined similarly to the fields in the previous example. The general procedure involves a loop. 
With nested arrays, the pattern is quite easy to show: In [28]: 1 0 0 0 1 0 0 {∊⌽¨⍺⊂⍵} 'ABCDXYZ' ⍝ Reverse two parts independently Out[28]: DCBAZYX In [29]: 1 0 0 0 1 0 0 {∊+/¨⍺⊂⍵} ⍳7 ⍝ Sum two parts independently Out[29]: 10 18 This pattern is easily turned into a dop: In [30]: _P ← {∊⍺⍺¨⍺⊂⍵} ⍝ A partitioned-function-application operator p ← 1 0 0 0 1 0 0 p ⌽ _P 'ABCDXYZ' p +/_P ⍳7 Out[30]: DCBAZYX Out[30]: 10 18 However, this looping approach can become inefficient for large arrays with many parts. Page 10 of Boolean Functions and Techniques gives some definitions for more array-oriented solutions for use with specific functions. Here we will look at a couple of these, starting with partitioned-reverse PREVERSE. The plus-scan is useful with these types of Boolean partitioned vectors, as it causes partitions to be marked by groups of successive integers: In [31]: a ← 'ABCDXYZ' ↑ a p (+\p) Out[31]: A B C D X Y Z 1 0 0 0 1 0 0 1 1 1 1 2 2 2 If we downgrade the plus-scan, the relative positions of indices for each partition relative to the whole array are reversed, but the positions of indices within each partition stay in ascending order: In [32]: ⍒+\p Out[32]: 5 6 7 1 2 3 4 Reversing the plus-scan then restores the relative positions of partitions within the whole array, but the indices within each partition are reversed: In [33]: ⌽⍒+\p Out[33]: 4 3 2 1 7 6 5 Indexing with this vector creates the desired effect: In [34]: a[⌽⍒+\p] Out[34]: DCBAZYX These kinds of techniques often give performance improvements compared to their general nested-array counterparts: p ← 1,1↓1=?1e4⍴10 t ← ⎕A[?1e4⍴26] ]runtime -c "p ⌽_P t" "p {⍵[⌽⍒+\⍺]} t" p ⌽_P t → 1.4E¯4 | 0% ⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕ p {⍵[⌽⍒+\⍺]} t → 3.5E¯5 | -75% ⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕ The partitioned sum starts with the plus-scan of the whole vector: In [35]: +\⍳7 Out[35]: 1 3 6 10 15 21 28 We then extract the progressive sums up to the end of each partition: In [36]: (1⌽p)/+\⍳7 Out[36]: 10 28 Finally subtract each 
running total from the next, i.e. take pairwise differences: In [37]: ¯2-/0,(1⌽p)/+\⍳7 Out[37]: 10 18 Once again, there is a speedup relative to the general case: p ← 1,1↓1=?1e4⍴10 i ← ?1e4⍴1000 ]runtime -c "p +/_P i" "p {¯2-/0,(1⌽⍺)/+\⍵} i" p +/_P i → 6.8E¯5 | 0% ⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕ p {¯2-/0,(1⌽⍺)/+\⍵} i → 8.9E¯6 | -87% ⎕⎕⎕⎕⎕ ## Efficient implementation¶ Some languages implement Boolean types as bytes (8 bits). The decision to use bit representations was made by Larry Breed for the first implementation of APL. Not only do Boolean arrays use the smallest amount of memory possible, they are also subject to SIMD parallelisations - even on hardware without special SIMD capabilities. More on this topic can be found in the APL Wiki article on Booleans. ## Sixteen boolean reductions¶ Finally, let's return to the table of Boolean functions from earlier. In the Vector article, Phil Last gives plain-English descriptions of the properties of Boolean vectors whose reduction by a Boolean function f gives 1.

| Function f | f/⍵ is true if ⍵ satisfies |
| --- | --- |
| 0⍨ | Never |
| ∧ | All ones |
| > | Odd leading ones |
| ⊣ | First is one |
| < | Last is the only one |
| ⊢ | Last is one |
| ≠ | Odd ones |
| ∨ | At least one one |
| ⍱ | Odd leading zeros else the last is the only one |
| = | Even zeroes |
| ~⍤⊢ | Last is parity of the length |
| ≥ | Even leading zeros |
| ~⍤⊣ | First is zero |
| ≤ | Last is not the only zero |
| ⍲ | Even leading ones else last is the only zero |
| 1⍨ | Always |

For those not described in this notebook, you might want to experiment and show yourself the relationships between the descriptions given in the table and the properties of the scans f\⍵. APL expressions offering similar insights are given on page 23 of the Boolean Functions and Techniques publication.
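For comparison with the flat-partition idioms above, here is a hedged Python translation (my addition, with hypothetical helper names): partitioned_reverse mirrors ⍵[⌽⍒+\⍺] using a stable sort in place of grade-down, and partitioned_sum mirrors ¯2-/0,(1⌽p)/+\⍵ by differencing the running total at partition boundaries.

```python
from itertools import accumulate

def partitioned_reverse(p, xs):
    # Analogue of the APL idiom ⍵[⌽⍒+\⍺]: grade the plus-scan down
    # (Python's sort is stable, like APL's grade), then reverse.
    cum = list(accumulate(p))
    grade_down = sorted(range(len(xs)), key=lambda i: -cum[i])
    return [xs[i] for i in reversed(grade_down)]

def partitioned_sum(p, xs):
    # Analogue of ¯2-/0,(1⌽p)/+\⍵: take the running total at the end
    # of each partition, then difference consecutive entries.
    totals = list(accumulate(xs))
    ends = [i for i in range(len(p)) if i == len(p) - 1 or p[i + 1] == 1]
    at_ends = [totals[i] for i in ends]
    return [b - a for a, b in zip([0] + at_ends, at_ends)]

p = [1, 0, 0, 0, 1, 0, 0]  # 1 marks the start of each partition
print("".join(partitioned_reverse(p, list("ABCDXYZ"))))  # DCBAZYX
print(partitioned_sum(p, [1, 2, 3, 4, 5, 6, 7]))         # [10, 18]
```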
2022-12-03 15:13:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4192962348461151, "perplexity": 846.3531084683223}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710933.89/warc/CC-MAIN-20221203143925-20221203173925-00277.warc.gz"}
https://arpi.unipi.it/handle/11568/245396
### A uniqueness result for the continuity equation in two dimensions

#### Abstract

We characterize the autonomous, divergence-free vector fields $b$ on the plane such that the Cauchy problem for the continuity equation $\partial_t u + \operatorname{div}(bu) = 0$ admits a unique bounded solution (in the weak sense) for every bounded initial datum; the characterization is given in terms of a property of Sard type for the potential $f$ associated to $b$. As a corollary we obtain uniqueness under the assumption that the curl of $b$ is a measure. This result can be extended to certain non-autonomous vector fields $b$ with bounded divergence.

2014 Alberti, Giovanni; Bianchini, Stefano; Crippa, Gianluca

File in this record: abc-uniqueness-v19.2.pdf (open access). Type: post-print. Licence: Creative Commons. Size: 439.94 kB. Use this identifier to cite or link to this record: https://hdl.handle.net/11568/245396
2023-03-20 09:38:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.815510094165802, "perplexity": 623.0481784282719}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943471.24/warc/CC-MAIN-20230320083513-20230320113513-00113.warc.gz"}
https://www.gradesaver.com/textbooks/math/algebra/introductory-algebra-for-college-students-7th-edition/chapter-8-section-8-3-operations-with-radicals-exercise-set-page-592/91
## Introductory Algebra for College Students (7th Edition)

Perimeter $=2+2\sqrt2$ inches; Area $=1$ square inch.

Perimeter is the sum of the lengths of the sides: $P=2+\sqrt2+\sqrt2$. Combine like radicals: $P=2+2\sqrt2$.

The area of a triangle is one half the product of the base and height: $A=\frac{1}{2}bh$, so $A=\frac{1}{2}(\sqrt2)(\sqrt2)$. Use the product rule for square roots: $A=\frac{1}{2}\times2$. Simplify: $A=1$.
https://nebusresearch.wordpress.com/tag/savage-chickens/
Reading the Comics, March 14, 2022: Pi Day Edition As promised I have the Pi Day comic strips from my reading here. I read nearly all the comics run on Comics Kingdom and on GoComics, no matter how hard their web sites try to avoid showing comics. (They have some server optimization thing that makes the comics sometimes just not load.) (By server optimization I mean “tracking for advertising purposes”.) Pi Day in the comics this year saw the event almost wholly given over to the phonetic coincidence that π sounds, in English, like pie. So this is not the deepest bench of mathematical topics to discuss. My love, who is not as fond of wordplay as I am, notes that the ancient Greeks likely pronounced the name of π about the same way we pronounce the letter “p”. This may be etymologically sound, but that’s not how we do it in English, and even if we switched over, that would not make things better. Scott Hilburn’s The Argyle Sweater is one of the few strips not to be about food. It is set in the world of anthropomorphized numerals, the other common theme to the day. John Hambrook’s The Brilliant Mind of Edison Lee leads off with the food jokes, in this case cookies rather than pie. The change adds a bit of Abbott-and-Costello energy to the action. Mick Mastroianni and Mason Mastroianni’s Dogs of C Kennel gets our first pie proper, this time tossed in the face. One of the commenters observes that the middle of a pecan pie can really hold heat, “Ouch”. Will’s holding it in his bare paw, though, so it can’t be that bad. Jules Rivera’s Mark Trail makes the most casual Pi Day reference. If the narrator hadn’t interrupted in the final panel no one would have reason to think this referenced anything. Mark Parisi’s Off The Mark is the other anthropomorphic numerals joke for the day. It’s built on the familiar fact that the digits of π go on forever. This is true for any integer base. In base π, of course, the representation of π is just “10”. But who uses that? 
And in base π, the number six would be something with infinitely many digits. There’s no fitting that in a one-panel comic, though. Doug Savage’s Savage Chickens is the one strip that wasn’t about food or anthropomorphized numerals. There is no practical reason to memorize digits of π, other than that you’re calculating something by hand and don’t want to waste time looking them up. In that case there’s not much call to go past 3.14. If you need more than about 3.14159, get a calculator to do it. But memorizing digits can be fun, and I will not underestimate the value of fun in getting someone interested in mathematics. For my part, I memorized π out to 3.1415926535897932, so that’s sixteen digits past the decimal. Always felt I could do more and I don’t know why I didn’t. The next couple digits are 8462, which has a nice descending-fifths cadence to it. The 626 following is a neat coda. My describing it this way may give you some idea of how I visualize the digits of π. They might help you, if you figure for some reason you need to do this. You do not, but if you enjoy it, enjoy it. Bianca Xunise’s Six Chix for the 15th ran a day late; Xunise only gets the comic on Tuesdays and the occasional Sunday. It returns to the food theme. And this brings me to the end of this year’s Pi Day comic strips. All of my Reading the Comics posts, past and someday future, should be at this link. And my various Pi Day essays should be here. Thank you for reading. Reading the Comics, June 3, 2020: Subjective Opinions Edition Thanks for being here for the last week before my All-2020 Mathematics A to Z starts. By the time this posts I should have decided on the A-topic, but I’m still up for B or C topics, if you’d be so kind as to suggest things. Bob Weber Jr’s Slylock Fox for the 1st of June sees Reeky Rat busted for speeding on the grounds of his average speed. It does make the case that Reeky Rat must have travelled faster than 20 miles per hour at some point.
There’s no information about when he did it, just the proof that there must have been some time when he drove faster than the speed limit. One can find loopholes in the reasoning, but, it’s a daily comic strip panel for kids. It would be unfair to demand things like proof there’s no shorter route from the diner and that the speed limit was 20 miles per hour the whole way. Ted Shearer’s Quincy for the 1st originally ran the 7th of April, 1981. Quincy and his friend ponder this being the computer age, and whether they can let computers handle mathematics. Jef Mallett’s Frazz for the 2nd has the characters talk about how mathematics offers answers that are just right or wrong. Something without “subjective grading”. It enjoys that reputation. But it’s not so, and that’s obvious when you imagine grading. How would you grade an answer that has the right approach, but makes a small careless error? Or how would you grade an approach that doesn’t work, but that plausibly could? And how do you know that the approach wouldn’t work? Even in non-graded mathematics, we have subjectivity. Much of mathematics is a search for convincing arguments about some question. What we hope to be convinced of is that there is a sound logical argument making the same conclusions. Whether the argument is convincing is necessarily subjective. Yes, in principle, we could create a full deductive argument. It will take forever to justify every step from some axiom or definition or rule of inference. And even then, how do we know a particular step is justified? It’s because we think we understand what the step does, and how it conforms to one (or more) rule. That’s again a judgement call. (The grading of essays is also less subjective than you might think if you haven’t been a grader. The difference between an essay worth 83 points and one worth 85 points may be trivial, yes. But you will rarely see an essay that reads as an A-grade one day and a C-grade the next. 
This is not to say that essay grading is not subject to biases. Some of these are innocent, such as the way the grader’s mood will affect the grade. Or how the first several papers, or the last couple, will be less consistently graded than the ones done in the middle of the project. Some are pernicious, such as under-rating the work done by ethnic minority students. But these biases affect the way one would grade, say, the partial credit for an imperfectly done algebra problem too.) Mark Anderson’s Andertoons for the 3rd is the Mark Anderson’s Andertoons for the week. I could also swear that I’ve featured it here before. I can’t find it, if I have discussed this strip before. I may not have. Wavehead’s observing the difference between zero as an additive identity and its role in multiplication. Ryan Pagelow’s Buni for the 3rd fits into the anthropomorphic-numerals category of joke. It’s really more of a representation of the year as the four horsemen of the Apocalypse. Dan Collins’s Looks Good on Paper for the 3rd has a cook grilling a “Möbius Strip Steak”. It’s a good joke for putting on a mathematics instructor’s door. Doug Savage’s Savage Chickens for the 3rd has, as part of animal facts, the assertion that “llamas have basic math skills”. I don’t know of any specific research on llama mathematics skills. But animals do have mathematics skills. Often counting. Some amount of reasoning. Social animals often have an understanding of transitivity, as well, especially if the social groups have a pecking order. And this wraps up half of the past week’s mathematically-themed comic strips. I hope to have the rest in a Reading the Comics post at this link in a few days. Thanks for reading. Reading the Comics, February 8, 2020: Delta Edition With this essay, I finally finish the comic strips from the first full week of February. You know how these things happen. I’ll get to the comics from last week soon enough, at an essay gathered under this link. 
For now, some pictures with words: Art Sansom and Chip Sansom’s The Born Loser for the 7th builds on one of the probability questions people often use. That is the probability of an event, in the weather forecast. Predictions for what the weather will do are so common that it takes work to realize there’s something difficult about the concept. The weather is a very complicated fluid-dynamics problem. It’s almost certainly chaotic. A chaotic system is deterministic, but unpredictable, because to get a meaningful prediction requires precision that’s impossible to ever have in the real world. The slight difference between the number π and the number 3.1415926535897932 throws calculations off too quickly. Nevertheless, it implies that the “chance” of snow on the weekend means about the same thing as the “chance” that Valentine’s Day was on the weekend this year. The way the system is set up implies it will be one or the other. This is a probability distribution, yes, but it’s a weird one. What we talk about when we say the “chance” of snow or Valentine’s on a weekend day is one of ignorance. It’s about our estimate that the true value of something is one of the properties we find interesting. Here, past knowledge can guide us. If we know that the past hundred times the weather was like this on Friday, snow came on the weekend less than ten times, we have evidence that suggests these conditions don’t often lead to snow. This is backed up, these days, by numerical simulations which are not perfect models of the weather. But they are ones that represent something very like the weather, and that stay reasonably good for several days or a week or so. And we have the question of whether the forecast is right. Observing this fact is used as the joke here. Still, there must be some measure of confidence in a forecast. Around here, the weather forecast is for a cold but not abnormally cold week ahead. This seems likely.
A forecast that it was to jump into the 80s and stay there for the rest of February would be so implausible that we’d ignore it altogether. A forecast that it would be ten degrees (Fahrenheit) below normal, or above, though? We could accept that pretty easily. Proving a forecast is wrong takes work, though. Mostly it takes evidence. If we look at a hundred times the forecast was for a 10% chance of snow, and it actually snowed 11% of the time, is it implausible that the forecast was right? Not really, not any more than a coin coming up tails 52 times out of 100 would be suspicious. If it actually snowed 20% of the time? That might suggest that the forecast was wrong. If it snowed 80% of the time? That suggests something’s very wrong with the forecasting methods. It’s hard to say one forecast is wrong, but we can have a sense of which forecasters are more often right than others are. Doug Savage’s Savage Chickens for the 7th is a cute little bit about counting. Counting things out is an interesting process; for some people, hearing numbers said aloud will disrupt their progress. For others, it won’t, but seeing numbers may disrupt it instead. Niklas Eriksson’s Carpe Diem for the 8th is a bit of silliness about the mathematical sense of animals. Studying how animals understand number is a real science, and it turns up interesting results. It shouldn’t be surprising that animals can do a fair bit of counting and some geometric reasoning, although it’s rougher than even our untrained childhood expertise. We get a good bit of our basic mathematical ability from somewhere, because we’re evolved to notice some things. It’s silly to suppose that dogs would be able to state the Pythagorean Theorem, at least in a form that we recognize. But it is probably someone’s good research problem to work out whether we can test whether dogs understand the implications of the theorem, and whether it helps them go about dog work any.
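The forecast-verification question earlier in this essay, whether 11, or 20, or 80 snowy days out of a hundred 10%-chance forecasts should raise suspicion, comes down to a binomial tail probability. Here is a rough sketch of the computation, my own illustration rather than anything from the original essay:

```python
from math import comb

def prob_at_least(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): the chance of k or more snowy days
    in n independent forecasts that each claim probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# A hundred forecasts, each a stated 10% chance of snow:
for snows in (11, 20, 80):
    print(snows, prob_at_least(snows, 100, 0.1))
```

Eleven snowy days turns out to be thoroughly unremarkable (the probability is over 40 percent), twenty is rare enough to raise an eyebrow, and eighty is so nearly impossible that the forecasting method itself must be broken, which is the conclusion the essay reaches by feel.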
Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 8th speaks of the “Cinnamon Roll Delta Function”. The point is clear enough on its own. So let me spoil a good enough bit of fluff by explaining that it’s a reference to something. There is, lurking in mathematical physics, a concept called the “Dirac delta function”, named for that innovative and imaginative fellow Paul Dirac. It has some weird properties. Its domain is … well, it has many domains. The real numbers. The set of ordered pairs of real numbers, $R^2$. The set of ordered triples of real numbers, $R^3$. Basically any space you like, there’s a Dirac delta function for it. The Dirac delta function is equal to zero everywhere in this domain, except at one point, the “origin”. At that one point, though? There it’s equal to … Here we step back a moment. We really, really, really want to say that it’s infinitely large at that point, which is what Weinersmith’s graph shows. If we’re being careful, we don’t say that though. Because if we did say that, then we would lose the thing that we use the Dirac delta function for. The Dirac delta function, represented with δ, is a function with the property that for any set D, in the domain, that you choose to integrate over $\int_D \delta(x) dx = 1$ whenever the origin is inside the interval of integration D. It’s equal to 0 if the origin is not inside the interval of integration. This, whatever the set is. If we use the ordinary definitions for what it means to integrate a function, and say that the delta function is “infinitely big” at the origin, then this won’t happen; the integral will be zero everywhere. This is one of those cases where physicists worked out new mathematical concepts, and the mathematicians had to come up with a rationalization by which this made sense. This because the function is quite useful. It allows us, mathematically, to turn descriptions of point particles into descriptions of continuous fields.
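One concrete way to see that integral property without the machinery of distributions: stand in for δ a sequence of ever-narrower normalized Gaussians and watch what the integral does. A small numerical sketch, my own illustration rather than anything from the strip:

```python
import math

def bump(x, eps):
    """A normalized Gaussian of width eps; its integral over the whole line is 1."""
    return math.exp(-x * x / (2 * eps * eps)) / (eps * math.sqrt(2 * math.pi))

def midpoint_integral(f, a, b, steps=100_000):
    """Plain midpoint-rule numerical integration."""
    h = (b - a) / steps
    return sum(f(a + (i + 0.5) * h) for i in range(steps)) * h

for eps in (1.0, 0.1, 0.01):
    near = midpoint_integral(lambda x: bump(x, eps), -1.0, 1.0)  # interval containing 0
    far = midpoint_integral(lambda x: bump(x, eps), 2.0, 4.0)    # interval missing 0
    print(eps, round(near, 4), round(far, 4))
```

As eps shrinks, the integral over the interval containing the origin settles to 1 and the integral over the interval missing the origin settles to 0, which is the defining property above. The limit of the bumps themselves is not a function in the ordinary sense, which is why the careful phrasing matters.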
And vice-versa: we can turn continuous fields into point particles. It turns out we like to do this a lot. So if we’re being careful we don’t say just what the Dirac delta function “is” at the origin, only some properties about what it does. And if we’re being further careful we’ll speak of it as a “distribution” rather than a function. But colloquially, we think of the Dirac delta function as one that’s zero everywhere, except for the one point where it’s somehow “a really big infinity” and we try to not look directly at it. The sharp-eyed observer may notice that Weinersmith’s graph does not put the great delta spike at the origin, that is, where the x-axis represents zero. This is true. We can create a delta-like function with a singular spot anywhere we like by the process called “translation”. That is, if we would like the function to be zero everywhere except at the point $a$, then we define a function $\delta_a(x) = \delta(x - a)$ and are done. Translation is a simple step, but it turns out to be useful all the time. Thanks again for reading. See you soon. Reading the Comics, December 28, 2019: Running Out The 2010s Edition And here’s the last four comic strips from the final full week of 2019. I have already picked a couple strips for the end of December to say at least something about. Those I intend to wait for Sunday to review, though. And, as with the strips from this past Sunday, these are too slight for me to write much about. That’s all right. I don’t need the extra workload of thinking this week. Doug Savage’s Savage Chickens for the 26th uses a blackboard of mathematics (as part of “understanding of particle physics”) as symbolic of intelligence. I’m not versed enough in particle physics to say whether the expressions make sense. I’m inclined toward it, since the first line has an integral of the reciprocal of the distance between a point x and a point x’. That looks to me like a calculation of some potential energy-related stuff. 
Dana Simpson’s Phoebe and her Unicorn for the 27th uses “memorizing multiplication tables” as the sort of challenging and tedious task that a friend would not put another one through. The strip surprised me; I would have thought Phoebe the sort of kid who’d find multiplication tables, with their symmetry and teasing hints of structure (compare any number on the upper-left-to-lower-right diagonal to the numbers just up-and-right or down-and-left to it, for example), fascinating enough to memorize on their own. Leigh Rubin’s Rubes for the 27th has a rat-or-mouse showing off one of those exciting calculations about how many rats-or-mice could breed in a year if absolutely nothing limited their growth. These sorts of calculations are fun for getting to big numbers in pretty little time. They’re only the first, loosest pieces of a model for anything’s population, though. If you want to make any claims about “the” new decade, you have to say what you pick “the” to signify. Complete decades from the (proleptically defined) 1st of January, 1, is a compelling choice. “Years starting the 1st of January, 2020” is also a compelling choice. Decide your preference and you’ll decide your answer. Thank you for reading, this essay and this whole year. 2020 is, of course, a leap year, or “bissextile year” if you want to establish your reputation as a calendar freak. Good luck. Reading the Comics, September 14, 2019: Friday the 13th Edition The past week included another Friday the 13th. Several comic strips found that worth mention. So that gives me a theme by which to name this look over the comic strips. Charles Schulz’s Peanuts rerun for the 12th presents a pretty wordy algebra problem. And Peppermint Patty, in the grips of a math anxiety, freezing up and shutting down. One feels for her. Great long strings of words frighten anyone. The problem seems a bit complicated for kids Peppermint Patty’s and Franklin’s age. But the problem isn’t helping. 
One might notice, say, that a parent’s age will be some nice multiple of a child’s in a year or two. That in ten years a man’s age will be 14 greater than the combined ages of his children then? What imagination does that inspire? Grant Peppermint Patty her fears. The situation isn’t hopeless. It helps to write out just what we know, and what we would like to know. At least what we would like to know if we’ve granted the problem worth solving. What we would like is to know the man’s age. That’s some number; let’s call it M. What we know are things about how M relates to his daughter’s and his son’s age, and how those relate to one another. Since we know several things about the daughter’s age and the son’s age it’s worth giving those names too. Let’s say D for the daughter’s age and S for the son’s. So. We know the son is three years older than the daughter. This we can write as $S = D + 3$. We know that in one year, the man will be six times as old as the daughter is now. In one year the man will be M + 1 years old. The daughter’s age now is D; six times that is 6D. So we know that $M + 1 = 6D$. In ten years the man’s age will be M + 10; the daughter’s age, D + 10; the son’s age, S + 10. In ten years, M + 10 will be 14 plus D + 10 plus S + 10. That is, $M + 10 = 14 + D + 10 + S + 10$. Or if you prefer, $M + 10 = D + S + 34$. Or even, $M = D + S + 24$. So this is a system of three equations, all linear, in three variables. This is hopeful. We can hope there will be a solution. And there is. There are different ways to find an answer. Since I’m grading this, you can use the one that feels most comfortable to you. The problem still seems a bit advanced for Peppermint Patty and Franklin. Julie Larson’s The Dinette Set rerun for the 13th has a bit of talk about a mathematical discovery. The comic is accurate enough for its publication. In 2008 a number known as M43112609 was proven to be prime. The number, $2^{43,112,609} - 1$, is some 12,978,189 digits long.
It’s still the fifth-largest known prime number (as I write this). Prime numbers of the form $2^N - 1$ for some whole number N are known as Mersenne primes. These are named for Marin Mersenne, a 17th century French friar and mathematician. They’re a neat set of numbers. Each Mersenne prime matches some perfect number. Nobody knows whether there are finitely or infinitely many Mersenne primes. Every even perfect number has a form that matches to some Mersenne prime. It’s unknown whether there are any odd perfect numbers. As often happens with number theory, the questions are easy to ask but hard to answer. But all the largest known prime numbers are Mersenne primes; they’re of a structure we can test pretty well. At least that electronic computers can test well; the last time the largest known prime was found by mere mechanical computer was 1951. The last time a non-Mersenne was the largest known prime was from 1989 to 1992, and before that, 1951. Mark Parisi’s Off The Mark for the 13th starts off the jokes about 13 for this edition. It’s also the anthropomorphic-numerals joke for the week. Doug Savage’s Savage Chickens for the 13th is a joke about the connotations of numbers, with (in the western tradition) 7 lucky and 13 unlucky. And many numbers just lack any particular connotation. T Shepherd’s Snow Sez for the 13th finishes off the unlucky-13 jokes. It observes that whatever a symbol might connote generally, your individual circumstances are more important. There are people for whom 13 is a good omen, or for whom Mondays are magnificent days, or for whom black cats are lucky. These are all the comics I can write paragraphs about. There were more comics mentioning mathematics last week. Here were some of them: Brian Walker, Greg Walker, and Chance Browne’s Hi and Lois for the 14th supposes that a “math nerd” can improve Thirsty’s golf game. Bill Amend’s FoxTrot Classics for the 14th, rerunning a strip from 1997, is a word problem joke.
I needed to re-read the panels to see what Paige’s complaint was about. Greg Evans’s Luann Againn for the 14th, repeating a strip from 1991, is about prioritizing mathematics homework. I can’t disagree with putting off the harder problems. It’s good to have experience, and doing similar but easier problems can help one crack the harder ones. Jonathan Lemon’s Rabbits Against Magic for the 14th is the Rubik’s Cube joke for the week. And that’s my comic strips for the week. I plan to have the next Reading the Comics post here on Sunday. The A to Z series resumes tomorrow, all going well. I am seeking topics for the letters I through N, at this post. Thank you for reading, and for offering your thoughts. Reading the Comics, June 29, 2019: Pacing Edition These are the last of the comics from the final full week of June. Ordinarily I’d have run this on Tuesday or Thursday of last week. But I also had my monthly readership-report post and that bit about a particle physics simulator also to post. It better fit a posting schedule of something every two or three days to move this to Sunday. This is what I tell myself is the rationale for not writing things up faster. Ernie Bushmiller’s Nancy Classics for the 27th uses arithmetic as an economical way to demonstrate intelligence. At least, the ability to do arithmetic is used as proof of intelligence. Which shouldn’t surprise. The conventional appreciation for Ernie Bushmiller is of his skill at efficiently communicating the ideas needed for a joke. That said, it’s a bit surprising Sluggo asks the dog “six times six divided by two”; if it were just showing any ability at arithmetic “one plus one” or “two plus two” would do. But “six times six divided by two” has the advantage of being a bit complicated. That is, it’s reasonable Sluggo wouldn’t know it right away, and would see it as something only the brainiest would. But it’s not so complicated that Sluggo wouldn’t plausibly know the question. 
Eric the Circle for the 28th, this one by AusAGirl, uses “Non-Euclidean” as a way to express weirdness in shape. My first impulse was to say that this wouldn’t really be a non-Euclidean circle. A non-Euclidean geometry has space that’s different from what we’re approximating with sheets of paper or with boxes put in a room. There are some that are familiar, or roughly familiar, such as the geometry of the surface of a planet. But you can draw circles on the surface of a globe. They don’t look like this mooshy T-circle. They look like … circles. Their weirdness comes in other ways, like how the circumference is not π times the diameter. On reflection, I’m being too harsh. What makes a space non-Euclidean is … well, many things. One that’s easy to understand is to imagine that the space uses some novel definition for the distance between points. Distance is a great idea. It turns out to be useful, in geometry and in analysis, to use a flexible idea of what distance is. We can define the distance between things in ways that look just like the Euclidean idea of distance. Or we can define it in other, weirder ways. We can, whatever the distance, define a “circle” as the set of points that are all exactly some distance from a chosen center point. And the appearance of those “circles” can differ. There are literally infinitely many possible distance functions. But there is a family of them which we use all the time. And the “circles” in those look like … well, at the most extreme, they look like squares. Others will look like rounded squares, or like slightly diamond-shaped circles. I don’t know of any distance function that’s useful that would give us a circle like this picture of Eric. But there surely is one that exists and that’s enough for the joke to be certified factually correct. And that is what’s truly important in a comic strip. Sandra Bell-Lundy’s Between Friends for the 29th is the Venn Diagram joke for the week.
Formally, you have to read this diagram charitably for it to parse. If we take the “what” that Maeve says, or doesn’t say, to be particular sentences, then the intersection has to be empty. You can’t both say and not-say a sentence. But it seems to me that any conversation of importance has the things which we choose to say and the things which we choose not to say. And it is so difficult to get the blend of things said and things unsaid correct. And I realize that the last time Between Friends came up here I was similarly defending the comic’s Venn Diagram use. I’m a sympathetic reader, at least to most comic strips. And that was the conclusion of comic strips through the 29th of June which mentioned mathematics enough for me to write much about. There were a couple other comics that brought up something or other, though. Wulff and Morgenthaler’s WuMo for the 27th of June has a Rubik’s Cube joke. The traditional Rubik’s Cube has three rows, columns, and layers of cubes. But there’s no reason there can’t be more rows and columns and layers. Back in the 80s there were enough four-by-four-by-four cubes sold that I even had one. Wikipedia tells me the officially licensed cubes have gotten only up to five-by-five-by-five. But that there was a 17-by-17-by-17 cube sold, with prototypes for 22-by-22-by-22 and 33-by-33-by-33 cubes. This seems to me like a great many stickers to peel off and reattach. And two comic strips did ballistic trajectory calculation jokes. These are great introductory problems for mathematical physics. They’re questions about things people can observe and so have a physical intuition for, and yet involve mathematics that’s not too subtle or baffling. John Rose’s Barney Google and Snuffy Smith mentioned the topic the 28th of June. Doug Savage’s Savage Chickens used it the 28th also, because sometimes comic strips just line up like that. This and other Reading the Comics posts should be at this link. 
This includes, I hope, the strips of this past week, that is, the start of July, which should be published Tuesday. Thanks for reading at all. Reading the Comics, April 18, 2019: Slow But Not Stopped Week Edition The first, important, thing is that I have not disappeared or done something worse. I just had one of those weeks where enough was happening that something had to give. I could either write up stuff for my mathematics blog, or I could feel guilty about not writing stuff up for my mathematics blog. Since I didn’t have time to do both, I went with feeling guilty about not writing, instead. I’m hoping this week will give me more writing time, but I am fooling only myself. Second is that Comics Kingdom has, for all my complaining, gotten less bad in the redesign. Mostly in that the whole comics page loads at once, now, instead of needing me to click to “load more comics” every six strips. Good. The strips still appear in weird random orders, especially strips like Prince Valiant that only run on Sundays, but still. I can take seeing a vintage Boner’s Ark Sunday strip six unnecessary times. The strips are still smaller than they used to be, and they’re not using the decent, three-row format that they used to. And the archives don’t let you look at a week’s worth in one page. But it’s less bad, and isn’t that all we can ever hope for out of the Internet anymore? And finally, Comic Strip Master Command wanted to make this an easy week for me by not having a lot to write about. It got so light I’ve maybe overcompensated. I’m not sure I have enough to write about here, but, I don’t want to completely vanish either. Dave Whamond’s Reality Check for the 15th is … hm. Well, it’s not an anthropomorphic-numerals joke. It is some kind of wordplay, making concrete a common phrase about, and attitude toward, numbers. I could make the fussy difference between numbers and numerals here but I’m not sure anyone has the patience for that. 
Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 17th touches around mathematics without, I admit, necessarily saying anything specific. The angel(?) welcoming the man to heaven mentions creating new systems of mathematics as some fit job for the heavenly host. The discussion of creating self-consistent physics systems seems mathematical in nature too. I’m not sure whether saying one could “attempt” to create self-consistent physics is meant to imply that our universe’s physics are not self-consistent. To create a “maximally complex reality using the simplest possible constructions” seems like a mathematical challenge as well. There are important fields of mathematics built on optimizing, trying to create the most extreme of one thing subject to some constraints or other. I think the strip’s premise is the old, partially a joke, concept that God is a mathematician. This would explain why the angel(?) seems to rate doing mathematics or mathematics-related projects as so important. But even then … well, consider. There’s nothing about designing new systems of mathematics that ordinary mortals can’t do. Creating new physics or new realities is beyond us, certainly, but designing the rules for such seems possible. I think I understood this comic better when I had thought about it less. Maybe including it in this column has only made trouble for me. Doug Savage’s Savage Chickens for the 17th amuses me by making a strip out of a logic paradox. It’s not quite your “this statement is a lie” paradox, but it feels close to that, to me. To have the first chicken call it “Birthday Paradox” also teases a familiar probability problem. It’s not a true paradox. It merely surprises people who haven’t encountered the problem before. This would be the question of how many people you need to have in a group before there’s a 50 percent (75 percent, 99 percent, whatever you like) chance of at least one pair sharing a birthday.
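The computation behind that surprise is short: multiply out the chance that each new person misses every birthday already taken. A sketch of my own, using the standard simplification of 365 equally likely birthdays:

```python
def p_shared_birthday(n, days=365):
    """Chance that at least two of n people share a birthday."""
    p_all_distinct = 1.0
    for i in range(n):
        p_all_distinct *= (days - i) / days
    return 1.0 - p_all_distinct

# Smallest group with at least a 50 percent chance of a shared birthday:
n = 1
while p_shared_birthday(n) < 0.5:
    n += 1
print(n, round(p_shared_birthday(n), 4))  # → 23 0.5073
```

Twenty-three people suffice, which is the number that startles people meeting the problem for the first time.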
And I notice on Wikipedia a neat variation of this birthday problem. This generalization considers splitting people into two distinct groups, and how many people you need in each group to have a set chance of a pair, one person from each group, sharing a birthday. Apparently both a 32-person group of 16 women and 16 men, or a 49-person group of 43 women and six men, have a 50% chance of some woman-man pair sharing a birthday. Neat.

Mark Parisi’s Off The Mark for the 18th sports a bit of wordplay. It’s built on how multiplication and division also have meanings in biology. … If I’m not mis-reading my dictionary, “multiply” meant any increase in number first, and the arithmetic operation we now call multiplication afterwards. Division, similarly, meant to separate into parts before it meant the mathematical operation as well. So it might be fairer to say that multiplication and division are words that picked up mathematical meaning.

And if you thought this week’s pickings had slender mathematical content? Jef Mallett’s Frazz, for the 19th, just mentioned mathematics homework. Well, there were a couple of quite slight jokes the previous week too, that I never mentioned. Jenny Campbell’s Flo and Friends for the 8th did a Roman numerals joke. The rerun of Richard Thompson’s Richard’s Poor Almanac for the 11th had the Platonic Fir Christmas tree, rendered as a geometric figure. I’ve discussed the connotations of that before.

And there we are. I hope to have some further writing this coming week. But if all else fails my next Reading the Comics essay, like all of them, should be at this link.

Reading the Comics, April 10, 2019: Grand Avenue and Luann Want My Attention Edition

So this past week has been a curious blend for the mathematically-themed comics. There were many comics mentioning some mathematical topic. But that’s because Grand Avenue and Luann Againn — reprints of early 90s Luann comics — have been doing a lot of schoolwork.
There’s a certain repetitiveness to saying, “and here we get a silly answer to a story problem” four times over. But we’ll see what I do with the work.

Mark Anderson’s Andertoons for the 7th is Mark Anderson’s Andertoons for the week. Very comforting to see. It’s a geometry-vocabulary joke, with Wavehead noticing the similar ends of some terms. I’m disappointed that I can’t offer much etymological insight. “Vertex”, for example, derives from the Latin for “highest point”, and traces back to the Proto-Indo-European root “wer-”, meaning “to turn, to bend”. “Apex” derives from the Latin for “summit” or “extreme”. And that traces back to the Proto-Indo-European “ap”, meaning “to take, to reach”. Which is all fine, but doesn’t offer much about how both words ended up ending in “ex”. This is where my failure to master Latin by reading a teach-yourself book on the bus during my morning commute for three months back in 2002 comes back to haunt me. There’s probably something that might have helped me in there.

Mac King and Bill King’s Magic in a Minute for the 7th is an activity puzzle this time. It’s also a legitimate problem of graph theory. Not a complicated one, but still, one. Graph theory is about sets of points, called vertices, and connections between points, called edges. It gives interesting results for anything that’s networked. That shows up in computers, in roadways, in blood vessels, in the spreads of disease, in maps, in shapes.

One common problem, found early in studying graph theory, is about whether a graph is planar. That is, can you draw the whole graph, all its vertices and edges, without any lines crossing each other? This graph, with six vertices and three edges, is planar. There are graphs that are not. If the challenge were to connect each number to a 1, a 2, and a 3, then it would be nonplanar. That’s a famous non-planar graph, given the obvious name K3,3.
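One quick way to see that K3,3 can’t be drawn without crossings uses Euler’s formula: a connected simple planar graph with no triangles (and a bipartite graph never has triangles) can have at most 2v − 4 edges on v vertices. A sketch of that test (the check itself is my addition, not part of the puzzle):

```python
def passes_bipartite_planarity_bound(v, e):
    """Necessary condition for a simple connected bipartite graph on
    v >= 3 vertices to be planar: e <= 2*v - 4. This follows from
    Euler's formula, since every face needs at least four bounding
    edges when there are no triangles."""
    return e <= 2 * v - 4

# K_{3,3}: each of three vertices on one side joined to each of three
# on the other, so six vertices and nine edges.
print(passes_bipartite_planarity_bound(6, 9))  # fails the bound: not planar
# A four-cycle, by contrast, is bipartite, planar, and passes.
print(passes_bipartite_planarity_bound(4, 4))
```

The bound is only a necessary condition, so passing it doesn’t prove a graph planar; but failing it, as K3,3 does with nine edges against a limit of eight, settles the matter.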
A fun part of learning graph theory — at least fun for me — is looking through pictures of graphs. The goal is finding K3,3, or another one called K5, inside a big messy graph.

Mike Thompson’s Grand Avenue for the 8th has had a week of story problems featuring both of the kid characters. Here’s the start of them. Making an addition or subtraction problem about counting things is probably a good way of making the problem less abstract. I don’t have children, so I don’t know whether they play marbles or care about them. The most recent time I saw any of my niblings I told them about the subtleties of industrial design in the old-fashioned Western Electric Model 2500 touch-tone telephone. They love me. Also I’m not sure that this question actually tests subtraction more than it tests reading comprehension. But there are teachers who like to throw in the occasional surprisingly easy one. Keeps students on their toes.

Greg Evans’s Luann Againn for the 10th is part of a sequence showing Gunther helping Luann with her mathematics homework. The story started the day before, but this was the first time a specific mathematical topic was named. The point-slope form is a conventional way of writing an equation which corresponds to a particular line. There are many ways to write equations for lines. This is one that’s convenient to use if you know coordinates for one point on the line and the slope of the line. Any coordinates which make the equation true are then the coordinates for some point on the line.

Doug Savage’s Savage Chickens for the 10th tosses in a line about logical paradoxes. In this case, using a classic problem, the self-referential statement. Working out whether a statement is true or false — its “truth value” — is one of those things we expect logic to be able to do. Some self-referential statements, logical claims about themselves, are troublesome.
“This statement is false” was a good one for baffling kids and would-be world-dominating computers in science fiction television up to about 1978. Some self-referential statements seem harmless, though. Nobody expects even the most timid world-dominating computer to be bothered by “this statement is true”. It takes more than just a statement being about itself to create a paradox.

And a last note. The blog hardly needs my push to help it out, but, sometimes people will miss a good thing. Ben Orlin’s Math With Bad Drawings just ran an essay about some of the many mathematics-themed comics that Hilary Price and Rina Piccolo’s Rhymes With Orange has run. The comic is one of my favorites too. Orlin looks through some of the comic’s twenty-plus year history and discusses the different types of mathematical jokes Price (with, in recent years, Piccolo) makes. Myself, I keep all my Reading the Comics essays at this link, and those mentioning some aspect of Rhymes With Orange at this link.

Reading the Comics, April 5, 2019: The Slow Week Edition

People reading my Reading the Comics post Sunday maybe noticed something. I mean besides my correct, reasonable complaining about the Comics Kingdom redesign. That is that all the comics were from before the 30th of March. That is, none were from the week before the 7th of April. The last full week of March had a lot of comic strips. The first week of April didn’t. So things got bumped a little. Here’s the results. It wasn’t a busy week, not when I filter out the strips that don’t offer much to write about. So now I’m stuck for what to post Thursday.

Jason Poland’s Robbie and Bobby for the 3rd is a Library of Babel comic strip. This is mathematical enough for me. Jorge Luis Borges’s Library is a magnificent representation of some ideas about infinity and probability. I’m surprised to realize I haven’t written an essay specifically about it. I have touched on it, in writing about normal numbers, and about the infinite monkey theorem.
The strip explains things well enough. The Library holds every book that will ever be written. In the original story there are some constraints. Particularly, all the books are 410 pages. If you wanted, say, a 600-page book, though, you could find one book with the first 410 pages and another book with the remaining 190 pages and then some filler. The catch, as explained in the story and in the comic strip, is finding them. And there is the problem of finding a ‘correct’ text. Every possible text of the correct length should be in there. So every possible book that might be titled Mark Twain vs Frankenstein, including ones that include neither Mark Twain nor Frankenstein, is there. Which is the one you want to read?

Henry Scarpelli and Craig Boldman’s Archie for the 4th features an equal-divisions problem. In principle, it’s easy to divide a pizza (or anything else) equally; that’s what we have fractions for. Making them practical is a bit harder. I do like Jughead’s quick work, though. It’s got the sleight-of-hand you expect from stage magic.

Scott Hilburn’s The Argyle Sweater for the 4th takes place in an algebra class. I’m not sure what algebraic principle $7^4 \times 13^6$ demonstrates, but it probably came from somewhere. Multiplied out, it’s 11,589,168,409. The exponentials on the blackboard do cue the reader to the real joke, of the sign reading “kick^10 me”. I question whether this is really an exponential kicking situation. It seems more like a simple multiplication to me. But it would be harder to make that joke read clearly.

Tony Cochran’s Agnes for the 5th is part of a sequence investigating how magnets work. Agnes and Trout find just … magnet parts inside. This is fair. It’s even mathematics. Thermodynamics classes teach one of the great mathematical physics models. This is about what makes magnets. Magnets are made of … smaller magnets. This seems like question-begging. Ultimately you get down to individual molecules, each of which is very slightly magnetic.
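That picture of molecule-scale magnets is what nearest-neighbor spin models capture, including the one-dimensional model mentioned just below. A minimal sketch of its energy function (the ±1 spin values and the coupling constant J are the textbook convention, nothing from the strip):

```python
def chain_energy(spins, J=1.0):
    """Energy of a one-dimensional nearest-neighbor spin chain with
    free ends. Each spin is +1 or -1; aligned neighbors lower the
    energy, so lining up is energetically favored."""
    return -J * sum(a * b for a, b in zip(spins, spins[1:]))

aligned = [1, 1, 1, 1]      # all the little magnets line up: a strong magnet
jumbled = [1, -1, 1, -1]    # alternating spins: no net magnetization
print(chain_energy(aligned))  # the lowest possible energy for four spins
print(chain_energy(jumbled))  # the highest
```

Adding temperature to this — letting thermal energy knock spins out of alignment — is what turns the toy into the thermodynamics-class model of how materials magnetize and demagnetize.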
When small magnets are lined up in the right way, they can become a strong magnet. When they’re lined up in another way, they can be a weak magnet. Or no magnet at all. How do they line up? It depends on things, including how the big magnet is made, and how it’s treated. A bit of energy can free molecules to line up, making a stronger magnet out of a weak one. Or it can break up the alignments, turning a strong magnet into a weak one. I’ve had physics instructors explain that you could, in principle, take an iron rod and magnetize it just by hitting it hard enough on the desk. And then demagnetize it by hitting it again. I have never seen one do this, though.

This is more than just a physics model. The mathematics of it is … well, it can be easy enough. A one-dimensional, nearest-neighbor model lets us describe how materials might turn into magnets or break apart, depending on their temperature. Two- or three-dimensional models, or models that have each small magnet affected by distant neighbors, are harder.

And then there’s the comic strips that didn’t offer much to write about. Brian Basset’s Red and Rover for the 3rd, Liniers’s Macanudo for the 5th, Stephen Bentley’s Herb and Jamaal rerun for the 5th, and Gordon Bess’s Redeye rerun for the 5th all idly mention mathematics class, or things brought up in class. Doug Savage’s Savage Chickens for the 2nd is another more-than-100-percent strip. Richard Thompson’s Richard’s Poor Almanac for the 3rd is a reprint of his Christmas Tree guide including a fir that “no longer inhabits Euclidean space”. Mike Baldwin’s Cornered for the 31st depicts a common idiom about numbers. Eric the Circle for the 5th, by Rafoliveira, plays on the ∞ symbol.

And that covers the mathematically-themed comic strips from last week. There are more coming, though. I’ll show them on Sunday. Thanks for reading.
Reading the Comics, March 9, 2019: In Which I Explain Eleven Edition

I thought I had a flood of mathematically-themed comic strips last week. On reflection, many of them were slight enough not to need further context. You’ll see in the paragraph of not-discussed strips at the end of this. What did rate discussion turned out to get more interesting to me the more I wrote about them.

Stephen Beals’s Adult Children for the 6th uses mathematics as an icon of things that are indisputably true. Two plus two equals four is a good example of such. If we take the ordinary meanings of ‘two’ and ‘plus’ and ‘equals’ and ‘four’ there’s no disputing it. The result follows from some uncontroversial-seeming axioms and a lot of deduction. By the rules of logic, the conclusion has to be true, whoever makes it. Even, for that matter, if nobody makes it. It’s difficult to imagine a universe in which nobody ever notices two plus two equals four. But we can imagine that there are mathematical truths that will never be noticed by anyone. (Here’s one. There is some largest finite whole number that any human-created project will ever use in any context. Consider the equation represented by “that number plus two equals (even bigger number)”.)

But you see cards palmed there. What do we mean by ‘two’? Have we got a good definition? Might there be a different definition that’s more useful? Probably not, for ‘two’ anyway. But a part of mathematics, especially as a field develops, is working out what are the important concepts, and what their definitions should be. What a ‘function’ is, for example, went through a lot of debate and change over the 19th century. There is an elusiveness to facts, even in mathematics, where you’d think epistemology would be simpler.

Frank Page’s Bob the Squirrel for the 6th continues the SAT prep questions from earlier in the week. There’s two more problems in shuffling around algebraic expressions here.
The first one, problem 5, is probably easiest to do by eliminating wrong answers. $(x^2 y - 3y^2 + 5xy^2) - (-x^2 y + 3xy^2 - 3y^2)$ is a tedious mess. But look at just the $x^2 y$ terms: they have to add up to $2x^2 y$, so, the answer has to be either c or d. So next look at the $3y^2$ terms and oh, that’s nice. They add up to zero. The answer has to be c. If you feel like checking the $5xy^2$ terms, go ahead; that’ll offer some reassurance, if you do the addition correctly.

The second one, problem 8, is probably easier to just think out. If $\frac{a}{b} = 2$ then there’s a lot of places to go. What stands out to me is that $4\frac{b}{a}$ has the reciprocal of $\frac{a}{b}$ in it. So, the reciprocal of $\frac{a}{b}$ has to equal the reciprocal of $2$. So $\frac{b}{a} = \frac{1}{2}$. And $4\frac{b}{a}$ is, well, four times $\frac{b}{a}$, so, four times one-half, or two.

There’s other ways to go about this. In honesty, what I did when I looked at the problem was multiply both sides of $\frac{a}{b} = 2$ by $\frac{b}{a}$. But it’s harder to explain why that struck me as an obviously right thing to do. It’s got shortcuts I grew into from being comfortable with the more methodical approach. Someone who does a lot of problems like these will discover shortcuts.

Rick Detorie’s One Big Happy for the 6th asks one of those questions you need to be a genius or a child to ponder. Why don’t the numbers eleven and twelve follow the pattern of the other teens, or for that matter of twenty-one and thirty-two, and the like? And the short answer is that they kind of do. At least, “eleven” and “twelve”, etymologists agree, derive from the Proto-Germanic “ainlif” and “twalif”. If you squint your mouth you can get from “ain” to “one” (it’s probably easier if you go through the German “ein” along the way). Getting from “twa” to “two” is less hard. If my understanding is correct, etymologists aren’t fully agreed on the “lif” part. But they are settled that it means the part above ten.
Like, “ainlif” would be “one left above ten”. So it parses as one-and-ten, putting it in form with the old London-English preference for one-and-twenty or two-and-thirty as word constructions. It’s not hard to figure how “twalif” might over centuries mutate to “twelve”. We could ask why “thirteen” didn’t stay something more Old Germanic. My suspicion is that it amounts to just, well, it worked out like that. It worked out the same way in German, which switches to “-zehn” endings from 13 on. Lithuanian has all the teens end with “-lika”; Polish, similarly, but with “-ście”. Spanish — not a Germanic language — has “custom” words for the numbers up to 15, and then switches to “dieci-” as a prefix to the numbers 6 through 9. French doesn’t switch to a systematic pattern until 17. (And no I am not going to talk about France’s 80s and 90s.) My supposition is that different peoples came to different conclusions about whether they needed ten, or twelve, or fifteen, or sixteen, unique names for numbers before they had to resort to systemic names. Here’s some more discussion of the teens, though, including some exploration of the controversy and links to other explanations.

Doug Savage’s Savage Chickens for the 6th is a percentages comic. It makes reference to an old series of (American, at least) advertisements in which four out of five dentists would agree that chewing sugarless gum is a good thing. Shifting the four-out-of-five into 80% riffs is not just fun with tautologies. Percentages have this connotation of technical precision; 80% sounds like a more rigorously known number than “four out of five”. It doesn’t sound as scientific as “0.80”, quite. But when applied to populations a percentage seems less bizarre than a decimal.

Oh, now, and what about comic strips I can’t think of anything much to write about? Ruben Bolling’s Super-Fun-Pak Comix for the 4th featured divisibility, in a panel titled “Fun Facts for the Obsessive-Compulsive”.
Olivia James’s Nancy on the 6th was avoiding mathematics homework. Jonathan Mahood’s Bleeker: The Rechargeable Dog for the 7th has Skip avoiding studying for his mathematics test. Bob Scott’s Bear With Me for the 7th has Molly mourning a bad result on her mathematics test. (The comic strip was formerly known as Molly And The Bear, if this seems familiar but the name seems wrong.) These are all different comic strips, I swear. Bill Holbrook’s Kevin and Kell for the 8th has Rudy and Fiona in mathematics class. (The strip originally ran in 2013; Comics Kingdom has started running Holbrook’s web comic, but at several years’ remove.) And, finally, Alex Hallatt’s Human Cull for the 8th talks about “110%” as a phrase. I don’t mind the phrase, but the comic strip has a harder premise.

And that finishes the comic strips from last week. But Pi Day is coming. I’ll be ready for it. Shall see you there.

Reading the Comics, December 5, 2018: December 5, 2018 Edition

And then I noticed there were a bunch of comic strips with some kind of mathematical theme on the same day. Always fun when that happens.

Bill Holbrook’s On The Fastrack uses one of Holbrook’s common motifs. That’s the depicting as literal some common metaphor. In this case it’s “massaging the numbers”, which might seem not strictly mathematics. But while numbers are interesting, they’re also useful. To be useful they must connect to something we want to know. They need context. That context is always something of human judgement. If the context seems inappropriate to the listener, she thinks the presenter is massaging the numbers. If the context seems fine, we trust the numbers as showing something true.

Scott Hilburn’s The Argyle Sweater is a seasonal pun that couldn’t wait for a day closer to Christmas. I’m a little curious why not. It would be the same joke with any subject, certainly. The strip did make me wonder if Ebenezer Scrooge, in-universe, might have taken calculus.
This led me to see that it’s a bit vague what, precisely, Scrooge, or Scrooge-and-Marley, did. The movies are glad to position him as having a warehouse, and importing and exporting things, and making and collecting on loans and whatnot. These are all trades that mathematicians would like to think benefit from knowing advanced mathematics. The logic of making loans implies attention be paid to compounding interest, risks, and expectation values, as well as projecting cash-flow a fair bit into the future. But in the original text he doesn’t make any stated loans, and the only warehouse anyone enters is Fezziwig’s. Well, the Scrooge and Marley sign stands “above the warehouse door”, but we only ever go in to the counting-house. And yes, what Scrooge does besides gather money and misery is irrelevant to the setting of the story.

Teresa Burritt’s Dadaist strip Frog Applause uses knowledge of mathematics as an emblem of intelligence. “Multivariate analysis” is a term of art from statistics. It’s about measuring how one variable changes depending on two or more other variables. The goal is obvious: we know there are many things that influence anything of interest. Can we find what things have the strongest effects? The weakest effects?

There are several ways we might mean “strongest” effect, too. It might mean that a small change in the independent variable produces a big change in the dependent one. Or it might mean that there’s very little noise, that a change in the independent variable produces a reliable change in the dependent one. Or we might have several variables that are difficult to measure precisely on their own, but with a combination that’s noticeable.

The basic calculations for this look a lot like those for single-variable analysis. But there’s much more calculation. It’s more tedious, at least. My reading suggests that multivariate analysis didn’t develop much until there were computers cheap enough to do the calculations. Might be coincidence, though.
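In its simplest form multivariate analysis is multiple linear regression: fit y = a·x1 + b·x2 + c and see which coefficient matters. A bare-hands sketch using the normal equations (the data here are made up, chosen so the fit should recover the coefficients exactly):

```python
def fit_two_predictors(x1s, x2s, ys):
    """Least-squares fit of y = a*x1 + b*x2 + c via the normal equations."""
    n = len(ys)
    cols = [list(x1s), list(x2s), [1.0] * n]
    # Normal equations: A @ [a, b, c] = rhs, where A[i][j] = cols[i] . cols[j].
    A = [[sum(u * v for u, v in zip(ci, cj)) for cj in cols] for ci in cols]
    rhs = [sum(u * y for u, y in zip(ci, ys)) for ci in cols]
    # Solve the 3x3 system by Gaussian elimination with partial pivoting.
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        rhs[i], rhs[p] = rhs[p], rhs[i]
        for r in range(i + 1, 3):
            f = A[r][i] / A[i][i]
            A[r] = [arj - f * aij for arj, aij in zip(A[r], A[i])]
            rhs[r] -= f * rhs[i]
    coef = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        coef[i] = (rhs[i] - sum(A[i][j] * coef[j] for j in range(i + 1, 3))) / A[i][i]
    return coef

# Data generated from y = 2*x1 - 3*x2 + 5; the fit should find a=2, b=-3, c=5.
x1s = [0.0, 1.0, 2.0, 3.0, 4.0]
x2s = [1.0, 0.0, 2.0, 1.0, 3.0]
ys = [2 * u - 3 * v + 5 for u, v in zip(x1s, x2s)]
a, b, c = fit_two_predictors(x1s, x2s, ys)
print(round(a, 6), round(b, 6), round(c, 6))
```

With real, noisy data the fit no longer recovers anything exactly, and the interesting work — deciding which coefficients are meaningfully different from zero — is where the statistics comes in.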
Many machine-learning techniques can be described as multivariate analysis problems.

Greg Evans’s Luann Againn is a Pi Day joke from before the time when Pi Day was a thing. Brad’s magazine flipping like that is an unusual bit of throwaway background humor for the comic strip.

Doug Savage’s Savage Chickens is a bunch of shape jokes. Since I was talking about tiling the plane so recently the rhombus seemed on-point enough. I think the irregular heptagon shown here won’t tile the plane. But given how much it turns out I didn’t know, I wouldn’t want to commit to that.

I’m working hard on a letter ‘X’ essay for my Fall 2018 Mathematics A To Z glossary. That should appear on Friday. And there should be another Reading the Comics post later this week, at this link.

Reading the Comics, August 16, 2018: Recursive Edition

This edition of Reading the Comics can be found at this link.

Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 14th is a fractals joke. Benoit Mandelbrot became the centerpiece of the big fractals boom in pop mathematics in the 80s and 90s. This was thanks to a fascinating property of complex-valued numbers that he discovered and publicized.

The Mandelbrot set is a collection of complex-valued numbers. It’s a border, properly, between two kinds of complex-valued numbers. This boundary has this fascinating shape that looks a bit like a couple kidney beans surrounded by lightning. That’s neat enough. What’s amazing, and makes this joke work, is what happens if you look closely at this boundary. Anywhere on it. In the bean shapes or in the lightning bolts. You find little replicas of the original shape. Not precisely the original shape. No two of these replicas are precisely identical (except for the “complex conjugate”, that is, something near the number $-1 + 1 \imath$ has a mirror image near $-1 - 1 \imath$). None of these look precisely like the original shape. But they look extremely close to one another.
They’re smaller, yes, and rotated relative to the original, and to other copies. But go anywhere on this boundary and there it is: the original shape, including miniature imperfect copies, all over again.

The Mandelbrot Set itself — well, there are a bunch of ways to think of it. One is in terms of something called the Julia Set, named for Gaston Julia. In 1918 he published a massive paper about the iteration of rational functions. That is, start with some domain and a function rule; what’s the range? Now if we used that range as the domain again, and used the same rule for the function, what’s the range of that range? If we use the range-of-that-range as the domain for the same function rule, what’s the range-of-the-range-of-the-range? The particular function here has one free parameter, a single complex-valued number. Depending on what it is, the range-of-the-range-of-the-range-etc becomes a set that’s either one big blob or a bunch of disconnected blobs. The Mandelbrot Set is the locus of parameters separating the one-big-blob from the many-disconnected-blob outcomes.

By the way, yes, Julia published this in 1918. The work was amazing. It was also forgotten. You can study this stuff analytically, but it’s hard. To visualize it you need to do incredible loads of computation. So this is why so much work lay fallow until the 1970s, when Mandelbrot could let computers do incredible loads of computation, and even draw some basic pictures.

Doug Savage’s Savage Chickens for the 14th is another instance of the monkeys-at-typewriters joke. I’ve written about this and the history of the monkeys-at-typewriters bit recently enough to feel comfortable pointing people there. It’s interesting that monkeys should have a connotation of reliably random typewriting, while cats would be reliably not doing something. But that’s a cultural image that’s a little too far from being mathematics for me to spend 800 words discussing.
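To circle back to the Mandelbrot set for a moment: the rule generating all those pictures is remarkably short. For a candidate number c, iterate z → z² + c starting from zero and watch whether it stays bounded. A minimal sketch (the escape radius of 2 and the iteration cap are the conventional choices):

```python
def in_mandelbrot(c, max_iter=200):
    """Crude membership test: treat c as inside the Mandelbrot set if
    z -> z*z + c, started from z = 0, stays within |z| <= 2 for
    max_iter steps. Once |z| exceeds 2 it is guaranteed to escape."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False
    return True

print(in_mandelbrot(0j))        # 0 never moves, so it is inside
print(in_mandelbrot(-1 + 0j))   # -1 cycles 0, -1, 0, -1, ... so inside
print(in_mandelbrot(1 + 0j))    # 1 runs away: 0, 1, 2, 5, 26, ...
```

Color each point of the plane by how quickly it escapes and the kidney-beans-and-lightning pictures appear.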
Thom Bluemel’s Birdbrains for the 15th is a calendars joke. Numbers come into play since, well, it seems odd to try tracking large numbers of dates without some sense of arithmetic. Also, likely, without some sense of geometry. Calendars are much used to forecast coming events, such as New and Full Moons or the seasons. That takes basic understanding of how to locate things in the sky to do at all. It takes sophisticated understanding of how to locate things in the sky to do well.

Scott Hilburn’s The Argyle Sweater for the 16th is the first anthropomorphic-numerals joke around here in like three days. Certainly, the scandalous thing is supposed to be these numbers multiplying out in public where anyone might see them. I wonder if any part of the scandal should be that multiplication like this has to include three partners: the 4, the 7, and the x. In algebra we get used to a convention in which we do without the ‘x’. Just placing one term next to another carries an implicit multiplication: ‘4a’ for ‘4 times a’. But that convention fails disastrously with numerals; what should we make of ’47’? We might write 4(7), or maybe (4)(7), to be clear. Or we might put a little centered dot between the two, $4 \cdot 7$. The ‘x’ by that point is reserved for “some variable whose value isn’t specified”. And it would be weird to write ‘4 times x times 7’. It wouldn’t be wrong; it’d just look weird. It would suggest you were trying to emphasize a point. I’ve probably done it in one of my long derivation-happy posts.

Other essays about comic strips are at this link. When I’ve talked about Saturday Morning Breakfast Cereal I’ve tried to make sure it turns up at this link. Essays in which I’ve discussed Savage Chickens should be at this link. The times I’ve discussed Birdbrains should be at this link. And other essays describing The Argyle Sweater are at this link.
Reading the Comics, March 17, 2018: Pi Day 2018 Edition

So today I am trying out including images for all the mathematically-themed comic strips here. This is because of my discovery that some links even on GoComics.com vanish without warning. I’m curious how long I can keep doing this. Not for legal reasons. Including comics for the purpose of an educational essay about topics raised by the strips is almost the most fair use imaginable. Just because it’s a hassle copying the images and putting them up on WordPress.com and that’s even before I think about how much image space I have there. We’ll see. I might try to figure out a better scheme.

Also in this batch of comics are the various Pi Day strips. There was a healthy number of mathematically-themed comics on the 14th of March. Many of those were just coincidence, though, with no Pi content. I’ll group the Pi Day strips together.

Tom Batiuk’s Funky Winkerbean for the 2nd of April, 1972 is, I think, the first appearance of Funky Winkerbean around here. Comics Kingdom just started running the strip, as well as Bud Blake’s Tiger and Bill Hoest’s Lockhorns, from the beginning as part of its Vintage Comics roster. And this strip really belonged in Sunday’s essay, but I noticed the vintage comics only after that installment went to press. Anyway, this strip — possibly the first Sunday Funky Winkerbean — plays off a then-contemporary fear of people being reduced to numbers in the face of a computerized society. If you can imagine people ever worrying about something like that. The early 1970s were a time in American society when people first paid attention to the existence of, like, credit reporting agencies. Just what they did and how they did it drew a lot of critical examination. Josh Lauer’s recently published Creditworthy: a History of Consumer Surveillance and Financial Identity in America gets into this.

Bob Scott’s Bear With Me for the 14th sees Molly struggling with failure on a mathematics test.
Could be any subject and the story would go as well, but I suppose mathematics gets a connotation of the subject everybody has to study for, even the geniuses. (The strip used to be called Molly and the Bear. In either name this seems to be the first time I’ve tagged it, although I only started tagging strips by name recently.)

Bud Fisher’s Mutt and Jeff rerun for the 14th is a rerun from sometime in 1952. I’m tickled by the problem of figuring out how many times Fisher and his uncredited assistants drew Mutt and Jeff. Mutt saying that the boss “drew us 16,436 times” matches the number of days in 45 years, so that makes sense if he’s counting the number of strips drawn. The number of times that Mutt and Jeff were drawn is … probably impossible to calculate. There’s so many panels each strip, especially going back to earlier and earlier times. And how many panels don’t have Mutt or don’t have Jeff or don’t have either in them? Jeff didn’t appear in the strip until March of 1908, for example, four months after the comic began. (With a different title, so the comic wasn’t just dangling loose all that while.)

Doug Savage’s Savage Chickens for the 14th is a collection of charts. Not all pie charts. And yes, it ran the 14th but avoids the pun it could make. I really like the tart charts, myself.

And now for the Pi Day strips proper. Scott Hilburn’s The Argyle Sweater for the 14th starts the Pi Day off, of course, with a pun and some extension of what makes 3/14 get its attention. And until Hilburn brought it up I’d never thought about the zodiac sign for someone born the 14th of March, so that’s something.

Mark Parisi’s Off The Mark for the 14th riffs on one of the interesting features of π, that it’s an irrational number. Well, that its decimal representation goes on forever. Rational numbers do that too, yes, but they all end in the infinite repetition of finitely many digits. And for a lot of them, that digit is ‘0’.
Irrational numbers keep going on with more complicated patterns. π sure seems like it’s a normal number. So we could expect that any finite string of digits appears somewhere in its decimal expansion. This would include a string of digits that encodes any story you like, The Neverending Story included. This does not mean we might ever find where that string is.

Michael Cavna’s Warped for the 14th combines the two major joke threads for Pi Day. Specifically naming Archimedes is a good choice. One of the many things Archimedes is famous for is finding an approximation for π. He’d worked out that π has to be larger than 3 10/71 but smaller than 3 1/7. Archimedes used an ingenious approach: we might not know the precise area of a circle given only its radius. But we can know the area of a triangle if we know the lengths of its legs. And we can draw a series of triangles that are enclosed by a circle. The area of the circle has to be larger than the sum of the areas of those triangles. We can draw a series of triangles that enclose a circle. The area of the circle has to be less than the sum of the areas of those triangles.

If we use a few triangles these bounds are going to be very loose. If we use a lot of triangles these bounds can be tight. In principle, we could make the bounds as close together as we could possibly need. We can see this, now, as a forerunner to calculus. They didn’t see it as such at the time, though. And it’s a demonstration of what amazing results can be found, even without calculus, but with clever specific reasoning. Here’s a run-through of the process.

John Zakour and Scott Roberts’s Working Daze for the 15th is a response to Dr Stephen Hawking’s death. The coincidence that he did die on the 14th of March made for an irresistibly interesting bit of trivia. Zakour and Roberts could get there first, thanks to working on a web comic and being quick on the draw.
(I’m curious whether they replaced a strip that was ready to go for the 15th, or whether they normally work one day ahead of publication. It’s an exciting but dangerous way to go.)

Reading the Comics, July 30, 2017: Not Really Mathematics edition

It’s been a busy enough week at Comic Strip Master Command that I’ll need to split the results across two essays. Any other week I’d be glad for this, since, hey, free content. But this week it hits a busy time and shouldn’t I have expected that? The odd thing is that the mathematics mentions have been numerous but not exactly deep. So let’s watch as I make something big out of that.

Mark Tatulli’s Heart of the City closed out its “Math Camp” storyline this week. It didn’t end up having much to do with mathematics and was instead about trust and personal responsibility issues. You know, like stories about kids who aren’t learning to believe in themselves and follow their dreams usually are. Since we never saw any real Math Camp activities we don’t get any idea what they were trying to do to interest kids in mathematics, which is a bit of a shame. My guess would be they’d play a lot of the logic-driven puzzles that are fun but that they never get to do in class. The story established that what I thought was an amusement park was instead a fair, so, that might be anywhere in Pennsylvania or a couple of other nearby states.

Rick Kirkman and Jerry Scott’s Baby Blues for the 25th sees Hammie have “another” mathematics worksheet accident. Could be any subject, really, but I suppose it would naturally be the one that hey wait a minute, why is he doing mathematics worksheets in late July? How early does their school district come back from summer vacation, anyway?

Olivia Walch’s Imogen Quest for the 26th uses a spot of mathematics as the emblem for teaching. In this case it’s a bit of physics. And an important bit of physics, too: it’s the time-dependent Schrödinger Equation.
This is the one that describes how, if you know the total energy of the system, and the rules that set its potential and kinetic energies, you can work out the function Ψ that describes it. Ψ is a function, and it’s a powerful one. It contains probability distributions: how likely whatever it is you’re modeling is to have a particle in this region, or in that region. How likely it is to have a particle with this much momentum, versus that much momentum. And so on. Each of these we find by applying a function to the function Ψ. It’s heady stuff, and amazing stuff to me. Ψ somehow contains everything we’d like to know. And different functions work like filters that make clear one aspect of that. Dan Thompson’s Brevity for the 26th is a joke about Sesame Street‘s Count von Count. Also about how we can take people’s natural aptitudes and delights and turn them into sad, droning unpleasantness in the service of corporate overlords. It’s fun. Steve Sicula’s Home and Away rerun for the 26th is a misplaced Pi Day joke. It originally ran the 22nd of April, but in 2010, before Pi Day was nearly so much a thing. Doug Savage’s Savage Chickens for the 26th proves something “scientific” by putting numbers into it. Particularly, by putting statistics into it. Understandable impulse. One of the great trends of the past century has been taking the idea that we only understand things when they are measured. And this implies statistics. Everything is unique. Only statistical measurement lets us understand what groups of similar things are like. Does something work better than the alternative? We have to run tests, and see how the something and the alternative work. Are they so similar that the differences between them could plausibly be chance alone? Are they so different that it strains belief that they’re equally effective? It’s one of science’s tools. It’s not everything which makes for science. But it is stuff easy to communicate in one panel. 
Neil Kohney’s The Other End for the 26th is really a finance joke. It’s about the ways the finance industry can turn one thing into a dazzling series of trades and derivative trades. But this is a field that mathematics colonized, or that colonized mathematics, over the past generation. Mathematical finance has done a lot to shape ideas of how we might study risk, and probability, and how we might form strategies to use that risk. It’s also done a lot to shape finance. Pretty much any major financial crisis you’ve encountered since about 1990 has been driven by a brilliant new mathematical concept meant to govern risk crashing up against the fact that humans don’t behave the way some model said they should. Nor could they; models are simplified, abstracted concepts that let hard problems be approximated. Every model has its points of failure. Hopefully we’ll learn enough about them that major financial crises can become as rare as, for example, major bridge collapses or major airplane disasters.
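The polygon-bounds idea credited to Archimedes in the Pi Day post above can be sketched in a few lines. This is an illustrative modern translation: it leans on the library's trigonometric functions for the polygon side lengths, where Archimedes derived them geometrically, by hand, with nested square-root estimates.

```python
import math

def pi_bounds(sides):
    """Bound pi between the half-perimeters of a regular polygon
    inscribed in a unit circle and one circumscribed about it."""
    half_angle = math.pi / sides
    lower = sides * math.sin(half_angle)  # inscribed: underestimates the circle
    upper = sides * math.tan(half_angle)  # circumscribed: overestimates it
    return lower, upper

# Archimedes pushed this to a 96-sided polygon; the resulting bounds
# sit strictly inside his published estimates 3 10/71 < pi < 3 1/7.
low, high = pi_bounds(96)
```

Using more sides tightens the squeeze: `pi_bounds(6)` gives the loose hexagon bounds, while `pi_bounds(96)` already pins π to two decimal places.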
http://openstudy.com/updates/52800891e4b022369a7cc3b7
johnny101: Calc, help! How do I determine at what points y = x + 2/cos x is continuous?

1. Decart: where the function is defined
2. pgpilot326: what's the def of continuous?
3. Decart: not at asymptote
4. johnny101: its not defined in text so yeah im assuming
5. pgpilot326: Definition in terms of limits of functions: The function f is continuous at some point c of its domain if the limit of f(x) as x approaches c through the domain of f exists and is equal to f(c). In mathematical notation, this is written as $\lim_{x \to c}{f(x)} = f(c).$ In detail this means three conditions: first, f has to be defined at c. Second, the limit on the left hand side of that equation has to exist. Third, the value of this limit must equal f(c).
6. Decart: as x approaches what value is the function undefined
7. Decart: take the derivative
8. johnny101: can you walk me through that step by step if possible? I missed the lecture and am somewhat lost
9. Decart: you need to use the quotient rule and the derivative of cos is -sin
10. johnny101: f(x)/g(x) = l/m?
11. pgpilot326: the function is not defined when $x=\frac{ \pi }{ 2 }+k\pi\text{, where }k \in \mathbb{Z}$
12. pgpilot326: because cos x will be 0 at those values of x and the function will not be defined there. Thus, the function will be discontinuous at those points
13. johnny101: so what you have above, π/2 + kπ — how did you determine points off that?
14. pgpilot326: y = x is continuous for all real x. y = 2/x is continuous for all real x where x is not 0. −1 ≤ cos x ≤ 1 for all real x. Thus, so long as cos x ≠ 0, your function will be continuous.
15. johnny101: ohhhhh. Ok I got it now. Thank you!
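The thread settles on reading the function as y = x + 2/cos x. A small sketch (assuming that reading) makes pgpilot326's conclusion concrete: the function is defined, and continuous, exactly where cos x ≠ 0.

```python
import math

def f(x):
    # The function from the question, read as y = x + 2/cos(x).
    return x + 2 / math.cos(x)

def is_continuous_at(x, eps=1e-9):
    # f is built from continuous pieces, so it is continuous wherever it is
    # defined -- i.e. wherever cos x is not 0, so x != pi/2 + k*pi.
    return abs(math.cos(x)) > eps
```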
https://itectec.com/superuser/windows-is-it-possible-to-easily-find-the-command-prompt-equivalent-of-a-context-menu-item/
# Windows – Is it possible to easily find the Command Prompt equivalent of a context menu item For example, Comodo Internet Security has added two items to my right-click menu, "Scan with Comodo" and "Run in Comodo Container." I don't particularly want both of them to be there clogging up my context menu; I'm never going to use the scan (I prefer Kaspersky for file scanning), and I'm rarely going to use the virtual desktop. What I'd like to do is disable them via registry (which I know how to do) BUT then put an item for "Run in Comodo Container" in my send-to menu. I already know how to add such items by creating a shortcut containing Command Prompt arguments in shell:sendto. However, I cannot figure out what those arguments should be to make it serve the same function as the original Comodo context menu item. Does anyone know if there's a simple way to figure out the cmd equivalent of a context menu item created by an application? I'd like to be able to do this for several different apps' items, not just Comodo. I don't know if there's a universal way to essentially translate registry mumbo-jumbo into understandable cmd code, but Google has turned up nothing. Anyone know if this is possible? This can get tricky because there are multiple places and methods to add a context menu item. HKEY_CLASSES_ROOT in the Registry contains, among other things, context menu items and shell extension registrations. Some subkeys of that hive represent kinds of objects you see in Explorer. You might have to poke around to find where exactly your menu item is registered. Particularly interesting keys are: • * applies to all files • Directory applies to all directories when right-clicking on a folder item • The Background subkey of Directory applies to all directories when right-clicking in the background of the current folder • exefile applies to applications (EXE files) Some of those subkeys have a shell subkey which contains subkeys for shell-specific registrations. 
Registrations with a command subkey represent context menu items. On my system, for example, AC3 files have a "Play with VLC media player" context menu item that comes from this branch: HKEY_CLASSES_ROOT Subkey: ac3file Subkey: shell Subkey: PlayWithVLC Default value: Play with VLC media player Subkey: command Default value: "C:\Program Files (x86)\VideoLAN\VLC\vlc.exe" --started-from-file --no-playlist-enqueue "%1" The default value of the command subkey is the command line invoked when the item is clicked. %1 gets replaced with the file/directory the item was used on. Relevant HowToGeek article. Unfortunately, some don't have a command line, and are instead run through COM objects. Some context menu items don't have distinct Registry entries at all, and are instead added dynamically by shell extensions. Relevant shell extensions are under the shellex\ContextMenuHandlers branch of the file type key instead of shell. If clicking such items produces a new process, you might be able to use Process Explorer to see the command line used - just mouse over a process. If not, it may not be possible to emulate the menu item with the command line.
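The `%1` substitution described above is easy to mimic. This is a hypothetical sketch, not the real ShellExecute machinery; the VLC command line is just the example template quoted earlier.

```python
def expand_verb_command(template, target):
    """Roughly what Explorer does with a shell\\command template:
    replace %1 with the file or folder the menu item was invoked on."""
    return template.replace("%1", target)

# Example template, as quoted from HKEY_CLASSES_ROOT\ac3file\shell\...\command:
VLC_TEMPLATE = (r'"C:\Program Files (x86)\VideoLAN\VLC\vlc.exe" '
                r'--started-from-file --no-playlist-enqueue "%1"')
```

Once you have recovered such a template from the Registry, a Send To shortcut only needs the same executable and arguments, with your file taking the place of `%1`.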
https://math.stackexchange.com/questions/863959/fermats-last-theorem-for-negative-n/863964
# Fermat's Last Theorem for Negative $n$ While studying Fermat's Last Theorem and Pythagorean triples, the following question occurred to me: For the equation $a^n+b^n=c^n$, where $n$ is a negative integer, a) does a solution exist, and b) if solutions exist, is there some analog to Fermat's Last Theorem for these parameters? I have made a few passing attempts at finding a solution and have come up empty handed, though I am no great mathematical mind and would not be shocked if I have missed even obvious answers. Thank you in advance for your aid. • You can multiply by $(abc)^n$ to get back into the case of positive powers. – Adam Hughes Jul 11 '14 at 5:02 • $3^{-1}+6^{-1} = 2^{-1}$ – JimmyK4542 Jul 11 '14 at 5:02 • @JimmyK4542 thank you for the solution. Now , does one exist for, say, $n<-2$? – Gotthold Jul 11 '14 at 5:11 • no, you'd have $$bc+ac=ab$$ – Adam Hughes Jul 11 '14 at 5:12 • @AdamHughes I'm sorry sir. – Gotthold Jul 11 '14 at 5:15 Let's suppose that $a^{-n}+b^{-n} = c^{-n}$ for some positive integers $a,b,c,n$. Then, multiply both sides by $a^nb^nc^n$ to get $b^nc^n + a^nc^n = a^nb^n$, i.e. $(bc)^n+(ac)^n=(ab)^n$. If $n \ge 3$, then this contradicts Fermat's Last Theorem. Hence, there are no solutions for $n \ge 3$. For $n = 1$, we have several solutions, one of which is $3^{-1}+6^{-1} = 2^{-1}$. For $n = 2$, we have several solutions, one of which is $15^{-2}+20^{-2} = 12^{-2}$. Fermat's theorem actually asks the above question for positive integers greater than two; so are you asking for negative $n$ less than negative two? Anyways, to answer your question consider the problem for $n = -1$. We have $$\frac{1}{a} + \frac{1}{b} = \frac{1}{c} = \frac{b}{ab} + \frac{a}{ab} = \frac{a+b}{ab}.$$ This means that if we have $(a+b) \vert ab$ then we can simply define $c=\frac{ab}{a+b}$. 
There exist many formulas for finding two distinct integers such that their sum divides their product (see link: Necessary and sufficient conditions for the sum of two numbers to divide their product). Ex: let $a=10$, $b=15$. We have $$\frac{1}{10}+\frac{1}{15} = \frac{1}{6}.$$ • Sir, thank you. Yes, I was asking about n values less than -2. – Gotthold Jul 11 '14 at 5:18 • For $n<-2$ you get $$(bc)^n+(ac)^n=(ab)^n$$ which is not solvable by Fermat's last theorem. – Adam Hughes Jul 11 '14 at 5:23
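The reductions above are easy to check with exact rational arithmetic. A small sketch, using the solutions quoted in the answers:

```python
from fractions import Fraction

def holds(a, b, c, n):
    """True when a^(-n) + b^(-n) == c^(-n), checked exactly with rationals."""
    return Fraction(1, a**n) + Fraction(1, b**n) == Fraction(1, c**n)

# The quoted solutions for n = 1 and n = 2:
#   3^-1 + 6^-1 = 2^-1,  10^-1 + 15^-1 = 6^-1,  15^-2 + 20^-2 = 12^-2
```

A brute-force scan over a small range finds no solutions at all for n = 3, as the reduction to $(bc)^n+(ac)^n=(ab)^n$ plus Fermat's Last Theorem guarantees.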
https://axiomsofchoice.org/countable_base_for_a_topology
Countable base for a topology

Set context: $\langle X,T\rangle$ … topological space
definiendum: $B\in$ it
postulate: $B$ … base
postulate: $B$ … countable
http://comp.text.tex.narkive.com/JnyIiKGA/koma-versions-for-use-in-headline-running-header-toc-and-pdf-toc
Discussion: koma: versions for use in headline, running header, toc, and pdf-toc

Ronnie Marksch, 2017-10-18:

Hi, is there a standard way to provide different texts for
* the headline,
* the running header,
* the table of contents, and
* the pdf table of contents?
I guess koma could offer something out of the box?
Best regards, Ronnie

\documentclass{scrbook}
\usepackage{lipsum}
\usepackage{hyperref}
\newcommand{\TOC}[2]{#2}
\begin{document}
\begingroup
\renewcommand{\TOC}[2]{#1}
\tableofcontents
\endgroup
\chapter[\protect\TOC{C1-TOC1\\C1-TOC2}{\texorpdfstring{C1-HEAD1 C1-HEAD2}{C1-PDF-TOC}}]{C1-Normal1\\C1-Normal2}
\lipsum
\section[\protect\TOC{S1-TOC1\\S1-TOC2}{\texorpdfstring{S1-HEAD1 S1-HEAD2}{S1-PDF-TOC}}]{S1-Normal1\\S1-Normal2}
\lipsum
\end{document}

Dr Engelbert Buxbaum, 2017-10-24:

In article <os7tla$106v$***@gioia.aioe.org>, ***@yahoo.de says...
> is there a standard way to provide different texts for the headline,
> the running header, the table of contents, and the pdf table of contents?

try

\chapter[short title]{A very, very long title that goes over several lines and completely messes up running headers and table of content and does not increase readability anyway}

--
DIN EN ISO 9241-13: 9.5.3 Error messages should convey what is wrong, what corrective actions can be taken, and the cause of the error.
https://jonwebb.dev/notes/priority-queue-adt/
Priority queues

A priority queue is an abstract data type operating similarly to a queue, except that elements are inserted with an associated priority. When elements are removed (dequeued), they are served in order of their priority, rather than in order of insertion.

Common operations

| Name | Description |
| --- | --- |
| size() | gets the number of elements stored in the queue |
| peek() | gets the highest-priority element of the queue |
| dequeue() | removes and returns the highest-priority element from the queue |
| enqueue(p, e) | inserts element e into the queue with priority p |

Binary heap implementation

Priority queues are commonly implemented via heaps, which allow access to the highest-priority item in $\mathcal{O}(1)$-time, and insertion and deletion from the queue in $\mathcal{O}(\log n)$-time.
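The operations above can be sketched on top of Python's binary-heap module. One assumption here: "highest priority" is taken to mean the smallest priority number (the convention varies), and a counter breaks ties so equal-priority elements dequeue in insertion order.

```python
import heapq
import itertools

class PriorityQueue:
    """Minimal binary-heap priority queue: lower number = higher priority."""

    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # tie-breaker for equal priorities

    def size(self):
        return len(self._heap)

    def enqueue(self, priority, element):            # O(log n)
        heapq.heappush(self._heap, (priority, next(self._counter), element))

    def peek(self):                                  # O(1)
        return self._heap[0][2]

    def dequeue(self):                               # O(log n)
        return heapq.heappop(self._heap)[2]
```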
https://zbmath.org/?q=an%3A1016.58013
## Conformal lower bounds for the Dirac operator of embedded hypersurfaces. (English) Zbl 1016.58013

Author’s abstract: We find sharp lower bounds for the first nonnegative eigenvalue of the classical intrinsic Dirac operator of a compact hypersurface bounding a domain in a Riemannian spin manifold. These estimates are given in terms of scalar (spectral) conformal invariants of the enclosed domain which are involved in the solution of the Yamabe problem.

### MSC:

58J50 Spectral problems; spectral geometry; scattering theory on manifolds
53C20 Global Riemannian geometry, including pinching
https://testbook.com/question-answer/the-resistances-of-the-four-arms-of-a-wheatstone-b--62ff4d7719b2837cad19db60
# The resistances of the four arms of a Wheatstone bridge are as follows: AB = 100, BC = 10, CD = 4, DA = 50 ohms. A 20-ohm resistance galvanometer is connected between B and D. If there is a potential difference of 10 V across AC, then find the value of the current flowing through the galvanometer.

This question was previously asked in UPRVUNL TG 2 (Electrician) Shift 2 Official Paper (Held on 14 July 2021)

1. 0.05 A
2. 0.5 A
3. 5.1 mA
4. 5 A

## Answer (Detailed Solution Below)

Option 3 : 5.1 mA

## Detailed Solution

Calculation:

Applying KVL in loop 1:

$$10=100(I_1-I_2)+10(I_1-I_3)$$
$$10=110I_1-100I_2-10I_3$$ ..........(i)

Applying KVL in loop 2:

$$100(I_2-I_1)+50I_2+20(I_2-I_3)=0$$
$$-100I_1+170I_2-20I_3=0$$ ...........(ii)

Applying KVL in loop 3:

$$20(I_3-I_2)+4I_3+10(I_3-I_1)=0$$
$$-10I_1-20I_2+34I_3=0$$ ............(iii)

Solving equations (i), (ii), and (iii), we get:

I1 ≈ 0.2762 A
I2 ≈ 0.1848 A
I3 ≈ 0.1899 A

The current through the galvanometer is:

I = I3 − I2 ≈ 0.1899 − 0.1848 = 0.0051 A ≈ 5.1 mA
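The three mesh equations above can be solved mechanically. A sketch using exact rational arithmetic (Cramer's rule on the 3×3 system), which confirms the 5.1 mA answer:

```python
from fractions import Fraction

def solve3(A, b):
    """Solve a 3x3 linear system A x = b by Cramer's rule, exactly."""
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det(A)
    xs = []
    for col in range(3):
        # Replace column `col` of A with b, per Cramer's rule.
        m = [[Fraction(b[r]) if c == col else Fraction(A[r][c])
              for c in range(3)] for r in range(3)]
        xs.append(det(m) / d)
    return xs

# Mesh equations (i)-(iii) from the detailed solution:
A = [[110, -100, -10],
     [-100, 170, -20],
     [-10, -20, 34]]
b = [10, 0, 0]
I1, I2, I3 = solve3(A, b)
I_g = I3 - I2   # galvanometer current, in amperes
```

The exact galvanometer current comes out as 5/974 A ≈ 0.00513 A, i.e. about 5.1 mA, so rounding the mesh currents to three decimals (which makes the difference look like 5 mA) loses the answer.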
https://akozbohatnutoetz.firebaseapp.com/13619/33170.html
# Dy dx vs zlúčenina In calculus, Leibniz's notation, named in honor of the 17th-century German philosopher and mathematician Gottfried Wilhelm Leibniz, uses the symbols dx and dy to represent infinitely small (or infinitesimal) increments of x and y, respectively, just as Δx and Δy represent finite increments of x and y, respectively. Nonlinear, one or more turning points. dy/dx = anx n-1. Derivative is a function, actual slope depends upon location (i.e. value of x) y = sums or differences of 2 functions y = f(x) + g xy, dy dx, y 0, etc. can be used. If the variable t represents time then D t f can be written f˙. Feb 08, 2020 · From an outsider’s perspective dY/dX is likely a cryptic name. It’s mathematics— “dY/dX” is notation for the "derivative of Y with respect to X." Now that we know that, it probably comes as no surprise that dY/dX handles derivatives products, but it’s DeFi. More specifically, dY/dX offers margin trading, for ETH, DAI and USDC. dx x n= nx 1 General Power Rule: d dx y(x) n= ny 1 y0(x) , due to chain rule: d dx y n= d dy y dy dx = nyn 1 y0(x) d dy y n= ny 1 is not the same as d dx y = nyn 1 y0(x) In d dy yn = nyn 1 the variable of di erentiation is y (i.e. d dy) the same as the variable in yn so the simple power rule is used. ## dy/dx : is the gradient of the tangent at a point on the curve y=f(x) Δy/Δx : is the gradient of a line through two points on the curve y=f(x) δy/δx is the gradient of the line between two ponts on the curve y=f(x) which are close together We start by calling the function "y": y = f(x) 1. Add Δx. When x increases by Δx, then y increases by Δy : y + Δy = f(x + Δx) 2. 
Subtract the Two Formulas On the other hand, the pullback of the density $\sigma\,dx\,dy$ is $$\alpha^*(\sigma\,dx\,dy) = (\alpha^*\sigma)\,|\det J|\,du\,dv.$$ The absolute value of the determinant reflects the fact that we don’t care about orientation and we have $\int_R \alpha^*(\sigma\,dx\,dy)=\int_{\alpha(R)}\sigma\,dx\,dy$ without requiring that $\alpha$ be Please subscribe for more calculus tutorials and share my videos to help my channel grow! ### 2014-12-14 The domain of these variables may take on a particular geometrical significance if the differential is regarded as a particular differential form , or analytical significance if the differential is regarded as a linear approximation to the increment of a function. 2008-03-20 On the other hand, the pullback of the density $\sigma\,dx\,dy$ is $$\alpha^*(\sigma\,dx\,dy) = (\alpha^*\sigma)\,|\det J |\,du\,dv.$$ The absolute value of the determinant reflects the fact that we don’t care about orientation and we have $\int_R \alpha^*(\sigma\,dx\,dy)=\int_{\alpha(R)}\sigma\,dx\,dy$ without requiring that $\alpha$ be orientation-preserving as we did for the integral of a 2010-01-18 If (dy/dx)=sin(x+y)+cos(x+y), y(0)=0, then tan (x+y/2)= (A) ex - 1 (B) (ex-1/2) (C) 2(ex - 1) (D) 1 - ex. Check Answer and Solution for above questi dy/dx is a limit in which y represents the dependent variable and x the independent variable. Since it is a limit, technically it is not a fraction. Share. Improve this answer. The question gives us dy/dt and we have to find dx/dt. 2009-03-07 2018-08-01 But dy/dx and and dx/dy are not fractions, they are the result of processes - specifically, limiting processes. 
Formally, we define: which means "let delta-x go to 0 and consider the limit of the ratio of (delta y)/(delta x)" and: which means "let delta-y go to 0 and consider the limit of the ratio of (delta x)/(delta y)" Note that each of these involves a different limiting process - the 2014-12-14 In Introduction to Derivatives (please read it first!) we looked at how to do a derivative using differences and limits.. Newton and Leibniz independently invented calculus around the  The precise meaning of the variables dy and dx depends on the context of the application and the required level of mathematical rigor. The domain of these  In calculus, Leibniz's notation, named in honor of the 17th-century German philosopher and mathematician Gottfried Wilhelm Leibniz, uses the symbols dx and dy to  If y is a function of x, Leibnitz represents the derivative by dy/dx instead of our y'. This notation has advantages and disadvantages. It is first important to understand  Here we look at doing the same thing but using the "dy/dx" notation (also called Leibniz's notation) instead of limits. slope delta x and delta y. We start by calling  Jul 9, 2020 This calculus video tutorial discusses the basic idea behind derivative notations such as dy/dx, d/dx, dy/dt, dx/dt, and d/dy.My Website: If this is equal to zero, 3x 2 - 27 = 0 Hence x 2 - 9 = 0 (dividing by 3) So (x + 3)(x - 3) = 0 dy/dx = 0. Slope = 0; y = linear function . y = ax + b. Straight line. dy/dx = a. is it acceptable to prove dy/dx * dx/dy=1 in the same way as the chain rule is proved, ie like this: 1= deltay/deltax * deltax/deltay where delta represents the greek letter delta reperesenting a small but finite change in the quantity take limits of both sides as deltax goes to 0 1=dy/dx * dx/dy is this an acceptable proof? I've just started reading through Calculus Made Easy by Silvanus Thompson and am trying to solidify the concept of differentials in my mind before progressing too far through the text. 
In Chapter 1: Why is $dy/dx$ a correct way to notate the derivative of cosine, or of any specific function for that matter? If I only wrote $dy/dx$ on a piece of paper and asked somebody...

In the symbol $\frac{dy}{dx}$, the derivative is taken with respect to the independent variable: the dependent variable is on top and the independent variable is on the bottom, so $\frac{dy}{dx} = \frac{d}{dx}(f(x))$, where $x$ is the independent variable.
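The multiple-choice ODE quoted earlier, $dy/dx = \sin(x+y) + \cos(x+y)$ with $y(0)=0$, can be checked numerically against option (A), $\tan\frac{x+y}{2} = e^x - 1$. A sketch using a classical RK4 step (the step count and the endpoint $x=1$ are arbitrary choices):

```python
import math

def f(x, y):
    # right-hand side of the quoted ODE
    return math.sin(x + y) + math.cos(x + y)

n = 10_000
h = 1.0 / n
x, y = 0.0, 0.0
for i in range(n):                      # integrate from x = 0 to x = 1
    k1 = f(x, y)
    k2 = f(x + h / 2, y + h * k1 / 2)
    k3 = f(x + h / 2, y + h * k2 / 2)
    k4 = f(x + h, y + h * k3)
    y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    x = (i + 1) * h

lhs = math.tan((x + y) / 2)
rhs = math.exp(x) - 1                   # option (A)
print(lhs, rhs)                         # the two agree closely
```

Analytically, substituting $u = x + y$ gives $du/dx = 2\cos^2\frac{u}{2}\,(1 + \tan\frac{u}{2})$, which integrates to $\ln(1 + \tan\frac{u}{2}) = x$, i.e. option (A).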
2023-03-29 03:48:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8891817331314087, "perplexity": 926.1151141267964}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948932.75/warc/CC-MAIN-20230329023546-20230329053546-00306.warc.gz"}
https://gateoverflow.in/3286/gate-it-2008-question-25
In how many ways can $b$ blue balls and $r$ red balls be distributed in $n$ distinct boxes? 1. $\frac{(n+b-1)!\,(n+r-1)!}{(n-1)!\,b!\,(n-1)!\,r!}$ 2. $\frac{(n+(b+r)-1)!}{(n-1)!\,(n-1)!\,(b+r)!}$ 3. $\frac{n!}{b!\,r!}$ 4. $\frac{(n + (b + r) - 1)!} {n!\,(b + r - 1)}$

The formula for the number of ways in which $n$ identical items can be shared among $r$ people is $C(n+r-1, r-1) = C(n+r-1, n+r-1-(r-1))$ [as $C(n,r) = C(n, n-r)$] $= C(n+r-1, n)$. Here the question gives $b$ identical blue balls, so in that formula $n = b$ and $r = n$, i.e. $C(b+n-1, b)$; the same holds with $n = r$. But can someone explain how the formula $C(n+r-1, r-1)$ is derived?

edited: I tried the pick-and-move strategy and got option D as my answer. :( edit: I figured out the mistake I was making. Takeaway: read all the comments on GATE Overflow; they give you many different ways to think about a single problem.

$r$ red balls can be distributed into $n$ distinct boxes in $C(n+r-1,r) = \frac{\left(n+r-1\right)!}{\left(n-1\right)!\, r!}$ ways, and $b$ blue balls in $C(n+b-1,b) = \frac{\left(n+b-1\right)!}{\left(n-1\right)!\, b!}$ ways. By the product rule, the total number of ways is $\frac{\left(n+b-1\right)! \left(n+r-1\right)! }{\left(n-1\right)!\, b!\left(n-1\right)!\, r!}$.

My doubt: isn't $C(n-1+r, r)$ the formula for the "distinct balls in boxes" problem? The question does not mention that the balls are distinct; can anyone clarify? This formula is applicable if you have $r$ identical objects and $n$ distinct boxes.
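On the question of where $C(n+k-1, n-1)$ comes from: each way of placing $k$ identical balls into $n$ distinct boxes corresponds to one arrangement of $k$ stars and $n-1$ bars, so counting distributions reduces to choosing the bar positions. A sketch making that bijection explicit for the small illustrative case $n = 3$, $k = 2$:

```python
from itertools import combinations
from math import comb

def distributions_via_stars_and_bars(n, k):
    """Each placement of k identical balls in n distinct boxes corresponds
    to choosing which n-1 of the n+k-1 symbol positions hold a bar '|';
    the runs of stars '*' between the bars are the box contents."""
    results = []
    for bars in combinations(range(n + k - 1), n - 1):
        counts, prev = [], -1
        for pos in bars:
            counts.append(pos - prev - 1)        # stars since the last bar
            prev = pos
        counts.append(n + k - 1 - prev - 1)      # stars after the last bar
        results.append(tuple(counts))
    return results

dists = distributions_via_stars_and_bars(3, 2)   # 2 balls, 3 boxes
print(len(dists), comb(3 + 2 - 1, 3 - 1))        # enumeration matches C(4, 2)
print(sorted(dists))
```

Every tuple produced has $n$ entries summing to $k$, and distinct bar choices give distinct tuples, which is exactly why the count is $\binom{n+k-1}{n-1} = \binom{n+k-1}{k}$.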
Here the problem is divided into two parts: first distributing the indistinguishable blue balls, then distributing the indistinguishable red balls, into the $n$ boxes respectively.

For $b$ blue balls: $Box_1 + Box_2 + Box_3 + \dots + Box_n = b$, which has $^{n+b-1}C_b$ solutions.

For $r$ red balls: $Box_1 + Box_2 + Box_3 + \dots + Box_n = r$, which has $^{n+r-1}C_r$ solutions.

Total ways $= {}^{n+b-1}C_b \cdot {}^{n+r-1}C_r$ (since the blue and red distributions are independent), which is option A.

In this type of question, why are we not considering $(b+r)$ balls into $n$ distinct boxes? Instead we are doing ($r$ balls into $n$ boxes) $\times$ ($b$ balls into $n$ boxes). What is the logic behind doing so? (Lumping all $b+r$ balls into a single count would treat blue and red balls as indistinguishable from each other, which they are not; since the two colours are distributed independently, the two counts multiply.)

$x_1 + x_2 + x_3 + \dots + x_n = b$ gives $C(n+b-1, b)$, and $x_1 + x_2 + x_3 + \dots + x_n = r$ gives $C(n+r-1, r)$; multiplying both, option A matches.
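The product-rule argument can be sanity-checked by brute force for a small case ($n = 3$ boxes, $b = r = 2$ is an arbitrary choice):

```python
from itertools import product
from math import comb

def ways(n, k):
    # Number of ways to put k identical balls into n distinct boxes,
    # counted naively by enumerating all count vectors summing to k.
    return sum(1 for c in product(range(k + 1), repeat=n) if sum(c) == k)

n, b, r = 3, 2, 2
formula = comb(n + b - 1, b) * comb(n + r - 1, r)   # option (A)
brute = ways(n, b) * ways(n, r)
print(formula, brute)   # both 36
```

The naive enumeration and option (A) agree, which is a quick way to rule out the other choices for small parameters as well.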
2023-02-01 21:22:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.838762640953064, "perplexity": 1577.4439262779974}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499953.47/warc/CC-MAIN-20230201211725-20230202001725-00124.warc.gz"}
http://www.reference.com/browse/mars-yellow
Definitions # Mars [mahrz] Mars is the fourth planet from the Sun in the Solar System. The planet is named after Mars, the Roman god of war. It is also referred to as the "Red Planet" because of its reddish appearance. Mars is a terrestrial planet with a thin atmosphere, having surface features reminiscent both of the impact craters of the Moon and the volcanoes, valleys, deserts and polar ice caps of Earth. It is the site of Olympus Mons, the highest known mountain in the Solar System, and of Valles Marineris, the largest canyon. Furthermore, in June 2008 three articles published in Nature presented evidence of an enormous impact crater in Mars' northern hemisphere, 10 600 km long by 8 500 km wide, or roughly four times larger than the largest impact crater yet discovered, the South Pole-Aitken basin. In addition to its geographical features, Mars’ rotational period and seasonal cycles are likewise similar to those of Earth. Until the first flyby of Mars by Mariner 4 in 1965, many speculated that there might be liquid water on the planet's surface. This was based on observations of periodic variations in light and dark patches, particularly in the polar latitudes, which looked like seas and continents, while long, dark striations were interpreted by some observers as irrigation channels for liquid water. These straight line features were later proven not to exist and were instead explained as optical illusions. Still, of all the planets in the Solar System other than Earth, Mars is the most likely to harbor liquid water, and perhaps life. Water, in the state of ice, was found by the Phoenix Mars Lander on July 31, 2008. Mars is currently host to three functional orbiting spacecraft: Mars Odyssey, Mars Express, and Mars Reconnaissance Orbiter. This is more than any planet in the Solar System except Earth. 
The surface is also home to the two Mars Exploration Rovers (Spirit and Opportunity), the lander Phoenix, and several inert landers and rovers that either failed or completed missions. Geological evidence gathered by these and preceding missions suggests that Mars previously had large-scale water coverage, while observations also indicate that small geyser-like water flows have occurred during the past decade. Observations by NASA's Mars Global Surveyor show evidence that parts of the southern polar ice cap have been receding. Mars has two moons, Phobos and Deimos, which are small and irregularly shaped. These may be captured asteroids, similar to 5261 Eureka, a Martian Trojan asteroid. Mars can be seen from Earth with the naked eye. Its apparent magnitude reaches −2.9, a brightness surpassed only by Venus, the Moon, and the Sun, though most of the time Jupiter will appear brighter to the naked eye than Mars.

## Physical characteristics

Mars has approximately half the radius of Earth. It is less dense than Earth, having about 15% of Earth's volume and 11% of the mass. Its surface area is only slightly less than the total area of Earth's dry land. While Mars is larger and more massive than Mercury, Mercury has a higher density. This results in a slightly stronger gravitational force at Mercury's surface. Mars is also roughly intermediate in size, mass, and surface gravity between Earth and Earth's Moon (the Moon is about half the diameter of Mars, whereas Earth is twice; the Earth is about ten times more massive than Mars, and the Moon ten times less massive). The red-orange appearance of the Martian surface is caused by iron(III) oxide, more commonly known as hematite, or rust.

### Geology

Based on orbital observations and the examination of the Martian meteorite collection, the surface of Mars appears to be composed primarily of basalt.
Some evidence suggests that a portion of the Martian surface is more silica-rich than typical basalt, and may be similar to andesitic rocks on Earth; however, these observations may also be explained by silica glass. Much of the surface is deeply covered by a fine iron(III) oxide dust that has the consistency of talcum powder. Although Mars has no intrinsic magnetic field, observations show that parts of the planet's crust have been magnetized and that alternating polarity reversals of its dipole field have occurred. This paleomagnetism of magnetically susceptible minerals has properties that are very similar to the alternating bands found on the ocean floors of Earth. One theory, published in 1999 and re-examined in October 2005 (with the help of the Mars Global Surveyor), is that these bands demonstrate plate tectonics on Mars 4 billion years ago, before the planetary dynamo ceased to function and caused the planet's magnetic field to fade away. Current models of the planet's interior imply a core region about 1,480 kilometres in radius, consisting primarily of iron with about 14–17% sulfur. This iron sulfide core is partially fluid, and has twice the concentration of the lighter elements than exist at Earth's core. The core is surrounded by a silicate mantle that formed many of the tectonic and volcanic features on the planet, but now appears to be inactive. The average thickness of the planet's crust is about 50 km, with a maximum thickness of 125 km. Earth's crust, averaging 40 km, is only a third as thick as Mars’ crust relative to the sizes of the two planets. The geological history of Mars can be split into many epochs, but the following are the three main ones: • Noachian epoch (named after Noachis Terra): Formation of the oldest extant surfaces of Mars, 3.8 billion years ago to 3.5 billion years ago. Noachian age surfaces are scarred by many large impact craters. 
The Tharsis bulge volcanic upland is thought to have formed during this period, with extensive flooding by liquid water late in the epoch. • Hesperian epoch (named after Hesperia Planum): 3.5 billion years ago to 1.8 billion years ago. The Hesperian epoch is marked by the formation of extensive lava plains. • Amazonian epoch (named after Amazonis Planitia): 1.8 billion years ago to present. Amazonian regions have few meteorite impact craters but are otherwise quite varied. Olympus Mons formed during this period, along with lava flows elsewhere on Mars. A major geological event occurred on Mars on February 19, 2008, and was caught on camera by the Mars Reconnaissance Orbiter. Images captured a spectacular avalanche of material, thought to be fine-grained ice, dust, and large blocks, detaching from a 700-meter-high cliff; evidence of the avalanche is present in the dust clouds left above the cliff afterwards. Recent studies support a theory, first proposed in the 1980s, that Mars was struck by a Pluto-sized body about four billion years ago. The event, thought to be the cause of the Martian hemispheric dichotomy, created the smooth Borealis basin that covers 40% of the planet.

### Soil

In June 2008, the Phoenix Lander returned data showing Martian soil to be slightly alkaline and containing vital nutrients such as magnesium, sodium, potassium and chloride, all of which are necessary for living things to grow. Scientists compared the soil near Mars' north pole to that of backyard gardens on Earth, saying it could be suitable for plants such as asparagus. However, in August 2008, the Phoenix Lander conducted simple chemistry experiments, mixing water from Earth with Martian soil in an attempt to test its pH, and discovered traces of the salt perchlorate. Its presence, if confirmed, would appear to make the soil more exotic than previously believed.
Further testing is necessary to eliminate any possibility of the perchlorate readings being influenced by terrestrial sources, which may have migrated from the spacecraft either into the samples or into the instrumentation.

### Hydrology

Liquid water cannot exist on the surface of Mars at its present low atmospheric pressure, except at the lowest elevations for short periods, but water ice is in no short supply: the two polar caps are made largely of ice. In March 2007, NASA announced that the volume of water ice in the south polar ice cap, if melted, would be sufficient to cover the entire planetary surface to a depth of 11 metres. Additionally, an ice permafrost mantle stretches down from the pole to latitudes of about 60°. Much larger quantities of water are thought to be trapped underneath Mars's thick cryosphere, only to be released when the crust is cracked through volcanic action. The largest such release of liquid water is thought to have occurred when the Valles Marineris formed early in Mars's history, enough water being released to form the massive outflow channels. A smaller but more recent event of the same kind may have occurred when the Cerberus Fossae chasm opened about 5 million years ago, leaving a supposed sea of frozen ice still visible today on the Elysium Planitia centered at Cerberus Palus. However, the morphology of this region is more consistent with the ponding of lava flows causing a superficial similarity to ice flows. These lava flows probably draped the terrain established by earlier catastrophic floods of Athabasca Valles. Significantly rough surface texture at decimeter (dm) scales, thermal inertia comparable to that of the Gusev plains, and hydrovolcanic cones are consistent with the lava flow hypothesis. Furthermore, the stoichiometric mass fraction of H2O in this area to tens-of-centimeter depths is only ~4%, easily attributable to hydrated minerals and inconsistent with the presence of near-surface ice.
More recently the high resolution Mars Orbiter Camera on the Mars Global Surveyor has taken pictures which give much more detail about the history of liquid water on the surface of Mars. Despite the many giant flood channels and associated tree-like network of tributaries found on Mars there are no smaller scale structures that would indicate the origin of the flood waters. It has been suggested that weathering processes have denuded these, indicating the river valleys are old features. Higher resolution observations from spacecraft like Mars Global Surveyor also revealed at least a few hundred features along crater and canyon walls that appear similar to terrestrial seepage gullies. The gullies tend to be in the highlands of the southern hemisphere and to face the Equator; all are poleward of 30° latitude. The researchers found no partially degraded (i.e. weathered) gullies and no superimposed impact craters, indicating that these are very young features. In a particularly striking example (see image) two photographs, taken six years apart, show a gully on Mars with what appears to be new deposits of sediment. Michael Meyer, the lead scientist for NASA's Mars Exploration Program, argues that only the flow of material with a high liquid water content could produce such a debris pattern and colouring. Whether the water results from precipitation, underground or another source remains an open question. However, alternative scenarios have been suggested, including the possibility of the deposits being caused by carbon dioxide frost or by the movement of dust on the Martian surface. Further evidence that liquid water once existed on the surface of Mars comes from the detection of specific minerals such as hematite and goethite, both of which sometimes form in the presence of water. 
Nevertheless, some of the evidence believed to indicate ancient water basins and flows has been negated by higher-resolution studies, taken at a resolution of about 30 cm by the Mars Reconnaissance Orbiter.

### Geography

Although better remembered for mapping the Moon, Johann Heinrich Mädler and Wilhelm Beer were the first "areographers". They began by establishing once and for all that most of Mars' surface features were permanent, and determining the planet's rotation period. In 1840, Mädler combined ten years of observations and drew the first map of Mars. Rather than giving names to the various markings, Beer and Mädler simply designated them with letters; Meridian Bay (Sinus Meridiani) was thus feature "a". Today, features on Mars are named from a number of sources. Large albedo features retain many of the older names, but are often updated to reflect new knowledge of the nature of the features. For example, Nix Olympica (the snows of Olympus) has become Olympus Mons (Mount Olympus). Mars' equator is defined by its rotation, but the location of its Prime Meridian was specified, as was Earth's (at Greenwich), by choice of an arbitrary point; Mädler and Beer selected a line in 1830 for their first maps of Mars. After the spacecraft Mariner 9 provided extensive imagery of Mars in 1972, a small crater (later called Airy-0), located in the Sinus Meridiani ("Middle Bay" or "Meridian Bay"), was chosen for the definition of 0.0° longitude to coincide with the original selection. Since Mars has no oceans and hence no 'sea level', a zero-elevation surface or mean gravity surface also had to be selected. Zero altitude is defined by the height at which there is 610.5 Pa (6.105 mbar) of atmospheric pressure. This pressure corresponds to the triple point of water, and is about 0.6% of the sea-level surface pressure on Earth (0.006 atm).
The dichotomy of Martian topography is striking: northern plains flattened by lava flows contrast with the southern highlands, pitted and cratered by ancient impacts. Research in 2008 has presented evidence regarding a theory proposed in 1980 postulating that, four billion years ago, the northern hemisphere of Mars was struck by an object one-tenth to two-thirds the size of the Moon. If validated, this would make Mars' northern hemisphere the site of an impact crater 10 600 km long by 8 500 km wide, or roughly the area of Europe, Asia, and Australia combined, surpassing the South Pole-Aitken basin as the largest impact crater in the Solar System. The surface of Mars as seen from Earth is divided into two kinds of areas, with differing albedo. The paler plains covered with dust and sand rich in reddish iron oxides were once thought of as Martian 'continents' and given names like Arabia Terra (land of Arabia) or Amazonis Planitia (Amazonian plain). The dark features were thought to be seas, hence their names Mare Erythraeum, Mare Sirenum and Aurorae Sinus. The largest dark feature seen from Earth is Syrtis Major. The shield volcano, Olympus Mons (Mount Olympus), at 26 km is the highest known mountain in the Solar System. It is an extinct volcano in the vast upland region Tharsis, which contains several other large volcanoes. It is over three times the height of Mount Everest which in comparison stands at only 8.848 km. Mars is also scarred by a number of impact craters: a total of 43 000 craters with a diameter of 5 km or greater have been found. The largest confirmed of these is the Hellas impact basin, a light albedo feature clearly visible from Earth. Due to the smaller mass of Mars, the probability of an object colliding with the planet is about half that of the Earth. However, Mars is located closer to the asteroid belt, so it has an increased chance of being struck by materials from that source. 
Mars is also more likely to be struck by short-period comets, i.e., those that lie within the orbit of Jupiter. In spite of this, there are far fewer craters on Mars compared with the Moon because Mars's atmosphere provides protection against small meteors. Some craters have a morphology that suggests the ground was wet when the meteor impacted. The large canyon, Valles Marineris (Latin for Mariner Valleys, also known as Agathadaemon in the old canal maps), has a length of 4000 km and a depth of up to 7 km. The length of Valles Marineris is equivalent to the length of Europe and extends across one-fifth the circumference of Mars. By comparison, the Grand Canyon on Earth is only 446 km long and nearly 2 km deep. Valles Marineris was formed due to the swelling of the Tharsis area, which caused the crust in the region of Valles Marineris to collapse. Another large canyon is Ma'adim Vallis (Ma'adim is Hebrew for Mars). It is 700 km long and again much bigger than the Grand Canyon, with a width of 20 km and a depth of 2 km in some places. It is possible that Ma'adim Vallis was flooded with liquid water in the past. Images from the Thermal Emission Imaging System (THEMIS) aboard NASA's Mars Odyssey orbiter have revealed seven possible cave entrances on the flanks of the Arsia Mons volcano. The caves, named Dena, Chloe, Wendy, Annie, Abbey, Nikki and Jeanne after loved ones of their discoverers, are collectively known as the "seven sisters". Cave entrances measure from 100 m to 252 m wide and they are believed to be at least 73 m to 96 m deep. Because light does not reach the floor of most of the caves, it is likely that they extend much deeper than these lower estimates and widen below the surface. Dena is the only exception; its floor is visible and was measured to be 130 m deep. The interiors of these caverns may be protected from micrometeoroids, UV radiation, solar flares and high-energy particles that bombard the planet's surface.
Some researchers have suggested that this protection makes the caves good candidates for future efforts to find liquid water and signs of life. Mars has two permanent polar ice caps: the northern one at Planum Boreum and the southern one at Planum Australe.

### Atmosphere

Mars lost its magnetosphere 4 billion years ago, so the solar wind interacts directly with the Martian ionosphere, keeping the atmosphere thinner than it would otherwise be by stripping away atoms from the outer layer. Both Mars Global Surveyor and Mars Express have detected these ionised atmospheric particles trailing off into space behind Mars. The atmosphere of Mars is now relatively thin. Atmospheric pressure on the surface varies from around 30 Pa (0.03 kPa) on Olympus Mons to over 1155 Pa (1.155 kPa) in the depths of Hellas Planitia, with a mean surface level pressure of 600 Pa (0.6 kPa). This is less than 1% of the surface pressure on Earth (101.3 kPa). Mars's mean surface pressure equals the pressure found 35 km above the Earth's surface. The scale height of the atmosphere, about 11 km, is higher than Earth's (6 km) due to the lower gravity. Mars' gravity is only about 38% of the surface gravity on Earth. The atmosphere on Mars consists of 95% carbon dioxide, 3% nitrogen, 1.6% argon, and contains traces of oxygen and water. The atmosphere is quite dusty, containing particulates about 1.5 µm in diameter which give the Martian sky a tawny color when seen from the surface. Several researchers claim to have detected methane in the Martian atmosphere with a concentration of about 10 ppb by volume. Since methane is an unstable gas that is broken down by ultraviolet radiation, typically lasting about 340 years in the Martian atmosphere, its presence would indicate a current or recent source of the gas on the planet. Volcanic activity, cometary impacts, and the presence of methanogenic microbial life forms are among possible sources.
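The pressure figures just quoted are roughly consistent with a simple isothermal barometric model, $p(h) = p_0\,e^{-h/H}$. A rough sketch using the text's values; the ~7 km depth of Hellas Planitia below the zero-elevation datum is an assumed figure, not stated in this section:

```python
import math

p0 = 600.0    # mean surface-level pressure, Pa (from the text)
H = 11.0      # Martian scale height, km (from the text)
depth = 7.0   # assumed depth of Hellas Planitia below the datum, km

# Below the datum the exponent is positive, so pressure is higher than p0.
p_hellas = p0 * math.exp(depth / H)
print(round(p_hellas))   # on the order of the quoted "over 1155 Pa"
```

The isothermal model is only a first approximation (the real atmosphere is not isothermal), but it lands within a few percent of the quoted Hellas figure.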
It was recently pointed out that methane could also be produced by a non-biological process called serpentinization involving water, carbon dioxide, and the mineral olivine, which is known to be common on Mars. During a pole's winter, it lies in continuous darkness, chilling the surface and causing 25–30% of the atmosphere to condense out into thick slabs of CO2 ice (dry ice). When the poles are again exposed to sunlight, the frozen CO2 sublimes, creating enormous winds that sweep off the poles as fast as 400 km/h. These seasonal actions transport large amounts of dust and water vapor, giving rise to Earth-like frost and large cirrus clouds. Clouds of water-ice were photographed by the Opportunity rover in 2004.

### Climate

Of all the planets, Mars's seasons are the most Earth-like, due to the similar tilts of the two planets' rotational axes. However, the lengths of the Martian seasons are about twice those of Earth's, as Mars' greater distance from the Sun leads to the Martian year being about two Earth years in length. Martian surface temperatures vary from lows of about −140 °C (−220 °F) during the polar winters to highs of up to 20 °C (68 °F) in summers. The wide range in temperatures is due to the thin atmosphere which cannot store much solar heat, the low atmospheric pressure, and the low thermal inertia of Martian soil. If Mars had an Earth-like orbit, its seasons would be similar to Earth's because its axial tilt is similar to Earth's. However, the comparatively large eccentricity of the Martian orbit has a significant effect. Mars is near perihelion when it is summer in the southern hemisphere and winter in the north, and near aphelion when it is winter in the southern hemisphere and summer in the north. As a result, the seasons in the southern hemisphere are more extreme and the seasons in the northern are milder than would otherwise be the case.
The summer temperatures in the south can be up to 30 °C (54 °F) warmer than the equivalent summer temperatures in the north. Mars also has the largest dust storms in our Solar System. These can vary from a storm over a small area, to gigantic storms that cover the entire planet. They tend to occur when Mars is closest to the Sun, and have been shown to increase the global temperature. The polar caps at both poles consist primarily of water ice. However, there is dry ice present on their surfaces. Frozen carbon dioxide (dry ice) accumulates as a thin layer about one metre thick on the north cap in the northern winter only, while the south cap has a permanent dry ice cover about eight metres thick. The northern polar cap has a diameter of about 1000 kilometres during the northern Mars summer, and contains about 1.6 million cubic kilometres of ice, which if spread evenly on the cap would be 2 kilometres thick. (This compares to a volume of 2.85 million cubic kilometres for the Greenland ice sheet.) The southern polar cap has a diameter of 350 km and a thickness of 3 km. The total volume of ice in the south polar cap plus the adjacent layered deposits has also been estimated at 1.6 million cubic kilometres. Both polar caps show spiral troughs, which are believed to form as a result of differential solar heating, coupled with the sublimation of ice and condensation of water vapor. Both polar caps shrink and regrow following the temperature fluctuation of the Martian seasons.

## Orbit and rotation

Mars' average distance from the Sun is roughly 230 million km (1.5 AU) and its orbital period is 687 (Earth) days. The solar day (or sol) on Mars is only slightly longer than an Earth day: 24 hours, 39 minutes, and 35.244 seconds. A Martian year is equal to 1.8809 Earth years, or 1 year, 320 days, and 18.2 hours. Mars's axial tilt is 25.19 degrees, which is similar to the axial tilt of the Earth.
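The orbital figures just quoted (semi-major axis ≈ 1.5 AU, period 687 days or 1.8809 years) are mutually consistent via Kepler's third law, $T^2 = a^3$ with $T$ in years and $a$ in AU. A quick check, where $a = 1.524$ AU is an assumed standard value slightly more precise than the rounded 1.5 AU in the text:

```python
a_au = 1.524                 # semi-major axis in AU (assumed standard value)
T_years = a_au ** 1.5        # Kepler's third law: T = a**(3/2)
T_days = T_years * 365.25

e = 0.09                     # orbital eccentricity (from the text)
perihelion = a_au * (1 - e)  # closest approach to the Sun, AU
aphelion = a_au * (1 + e)    # farthest point, AU

print(round(T_years, 3), round(T_days))          # ~1.881 years, ~687 days
print(round(perihelion, 2), round(aphelion, 2))  # ~1.39 AU and ~1.66 AU
```

The computed year length matches the quoted 1.8809 Earth years to within rounding, and the perihelion/aphelion spread shows how the ~0.09 eccentricity moves Mars roughly 0.27 AU between its closest and farthest points.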
As a result, Mars has seasons like the Earth, though on Mars they are nearly twice as long given its longer year. Mars passed its perihelion in June 2007 and its aphelion in May 2008. Mars has a relatively pronounced orbital eccentricity of about 0.09; of the seven other planets in the Solar System, only Mercury shows greater eccentricity. However, it is known that in the past Mars has had a much more circular orbit than it does currently. At one point 1.35 million Earth years ago, Mars had an eccentricity of roughly 0.002, much less than that of Earth today. The Mars cycle of eccentricity is 96 000 Earth years compared to the Earth's cycle of 100 000 years. However, Mars also has a much longer cycle of eccentricity with a period of 2.2 million Earth years, and this overshadows the 96 000-year cycle in the eccentricity graphs. For the last 35 000 years Mars' orbit has been getting slightly more eccentric because of the gravitational effects of the other planets. The closest distance between the Earth and Mars will continue to mildly decrease for the next 25 000 years. [Figure: a comparison between Mars and Ceres, a dwarf planet in the Asteroid Belt, seen from the ecliptic pole (left) and from the ascending node (right); the segments of orbits below the ecliptic are plotted in darker colors, and the perihelia (q) and aphelia (Q) are labelled with the date of the nearest passage.]

## Moons

Mars has two tiny natural moons, Phobos and Deimos, which orbit very close to the planet and are thought to be captured asteroids. Both satellites were discovered in 1877 by Asaph Hall, and are named after the characters Phobos (panic/fear) and Deimos (terror/dread) who, in Greek mythology, accompanied their father Ares, god of war, into battle. Ares was known as Mars to the Romans. From the surface of Mars, the motions of Phobos and Deimos appear very different from that of our own moon.
Phobos rises in the west, sets in the east, and rises again in just 11 hours. Deimos, being only just outside synchronous orbit—where the orbital period would match the planet's period of rotation—rises as expected in the east but very slowly. Despite the 30-hour orbit of Deimos, it takes 2.7 days to set in the west as it slowly falls behind the rotation of Mars, then just as long again to rise. Because Phobos' orbit is below synchronous altitude, the tidal forces from the planet Mars are gradually lowering its orbit. In about 50 million years it will either crash into Mars' surface or break up into a ring structure around the planet. It is not well understood how or when Mars came to capture its two moons. Both have circular orbits, very near the equator, which is very unusual in itself for captured objects. Phobos's unstable orbit would seem to point towards a relatively recent capture. There is no known mechanism for an airless Mars to capture a lone asteroid, so it is likely that a third body was involved—however, asteroids as large as Phobos and Deimos are rare, and binaries rarer still, outside the asteroid belt.

## Life

The current understanding of planetary habitability—the ability of a world to develop and sustain life—favors planets that have liquid water on their surface. This requires that the orbit of a planet lie within a habitable zone, which for the Sun is currently occupied by Earth. Mars orbits half an astronomical unit beyond this zone and this, along with the planet's thin atmosphere, causes water to freeze on its surface. The past flow of liquid water, however, demonstrates the planet's potential for habitability. Recent evidence has suggested that any water on the Martian surface would have been too salty and acidic to support life.
The lack of a magnetosphere and the extremely thin atmosphere of Mars are a greater challenge: the planet has little heat transfer across its surface, poor insulation against bombardment and the solar wind, and insufficient atmospheric pressure to retain water in a liquid form (water instead sublimates to a gaseous state). Mars is also nearly, or perhaps totally, geologically dead; the end of volcanic activity has stopped the recycling of chemicals and minerals between the surface and interior of the planet. Evidence suggests that the planet was once significantly more habitable than it is today, but whether living organisms ever existed there is still unclear. The Viking probes of the mid-1970s carried experiments designed to detect microorganisms in Martian soil at their respective landing sites, and had some apparently positive results, including a temporary increase of CO2 production on exposure to water and nutrients. However, this sign of life was later disputed by many scientists, resulting in a continuing debate, with NASA scientist Gilbert Levin asserting that Viking may have found life. A re-analysis of the now 30-year-old Viking data, in light of modern knowledge of extremophile forms of life, has suggested that the Viking tests were also not sophisticated enough to detect these forms of life. The tests may even have killed a (hypothetical) life form. Tests conducted by the Phoenix Mars Lander have shown that the soil has a very alkaline pH and that it contains magnesium, sodium, potassium and chloride. The soil nutrients may be able to support life, but life would still have to be shielded from the intense ultraviolet light. At the Johnson Space Center lab, organic compounds have been found in the meteorite ALH84001, which is thought to have come from Mars. Researchers there concluded that these were deposited by primitive life forms extant on Mars before the meteorite was blasted into space by a meteor strike and sent on a 15 million-year voyage to Earth.
Also, small quantities of methane and formaldehyde recently detected by Mars orbiters are both claimed to be hints of life, as these chemical compounds would quickly break down in the Martian atmosphere. It is possible that these compounds are replenished by volcanic or geological means such as serpentinization.

## Exploration

Dozens of spacecraft, including orbiters, landers, and rovers, have been sent to Mars by the Soviet Union, the United States, Europe, and Japan to study the planet's surface, climate, and geology. Roughly two-thirds of all spacecraft destined for Mars have failed in one manner or another before completing or even beginning their missions. While much of this high failure rate can be ascribed to technical problems, enough craft have failed or lost communications for unknown reasons that some have searched for other explanations. Examples include an Earth-Mars "Bermuda Triangle", a Mars Curse, or even the long-standing NASA in-joke, the "Great Galactic Ghoul" that feeds on Martian spacecraft.

### Past missions

The first successful fly-by mission to Mars was NASA's Mariner 4, launched in 1964. The first successful objects to land on the surface were two Soviet probes, Mars 2 and Mars 3 from the Mars probe program, launched in 1971, but both lost contact within seconds of landing. Then came the 1975 NASA launches of the Viking program, which consisted of two orbiters, each carrying a lander; both landers successfully touched down in 1976. Viking 1 remained operational for six years, Viking 2 for three. The Viking landers relayed the first color pictures of Mars and mapped the surface so well that the images are still sometimes used today. The Soviet probes Phobos 1 and 2 were sent to Mars in 1988 to study Mars and its two moons. Phobos 1 lost contact on the way to Mars. Phobos 2, while successfully photographing Mars and Phobos, failed just before it was set to release two landers on Phobos's surface.
Following the 1992 failure of the Mars Observer orbiter, NASA launched the Mars Global Surveyor in 1996. This mission was a complete success, having finished its primary mapping mission in early 2001. Contact was lost with the probe in November 2006, during its third extended program, after exactly 10 operational years in space. Only a month after the launch of the Surveyor, NASA launched the Mars Pathfinder, carrying the robotic exploration vehicle Sojourner, which landed in the Ares Vallis on Mars. This mission was also successful, and received much publicity, partly due to the many images that were sent back to Earth.

### Current missions

In 2001, NASA launched the successful Mars Odyssey orbiter, which is still in orbit as of March 2008; its mission has been extended to September 2008. Odyssey's Gamma Ray Spectrometer detected significant amounts of hydrogen in the upper metre or so of Mars's regolith. This hydrogen is thought to be contained in large deposits of water ice. In 2003, the ESA launched the Mars Express craft, consisting of the Mars Express Orbiter and the lander Beagle 2. Beagle 2 failed during descent and was declared lost in early February 2004. In early 2004, the Planetary Fourier Spectrometer team announced that it had detected methane in the Martian atmosphere. ESA announced in June 2006 the discovery of aurorae on Mars. Also in 2003, NASA launched the twin Mars Exploration Rovers, Spirit (MER-A) and Opportunity (MER-B). Both missions landed successfully in January 2004 and have met or exceeded all their targets. Among the most significant scientific returns has been conclusive evidence that liquid water existed at some time in the past at both landing sites. Martian dust devils and windstorms have occasionally cleaned both rovers' solar panels, and thus increased their lifespan.
On August 12, 2005, the NASA Mars Reconnaissance Orbiter probe was launched toward the planet, arriving in orbit on March 10, 2006 to conduct a two-year science survey. The orbiter will map the Martian terrain and weather to find suitable landing sites for upcoming lander missions. It also carries an improved telecommunications link to Earth, with more bandwidth than all previous missions combined. The Mars Reconnaissance Orbiter snapped the first image of a series of active avalanches near the planet's north pole, scientists said on March 3, 2008. The most recent mission to Mars is the NASA Phoenix Mars lander, which launched on August 4, 2007 and arrived in the north polar region of Mars on May 25, 2008. The lander has a robotic arm with a 2.5 m reach, capable of digging a meter into the Martian soil. It carries a microscopic camera capable of resolving to one-thousandth the width of a human hair, and discovered a substance at its landing site on June 15, 2008, which was confirmed to be water ice on June 20. The Dawn spacecraft will fly by Mars in February 2009 for a gravity assist on its way to investigate Vesta and then Ceres.

### Future missions

Phoenix will be followed by the Mars Science Laboratory in 2009, a bigger, faster (90 m/h), and smarter version of the Mars Exploration Rovers. Its experiments include a laser chemical sampler that can deduce the make-up of rocks at a distance of 13 m. The joint Russian and Chinese Phobos-Grunt sample-return mission, to return samples of Mars's moon Phobos, is scheduled for a 2009 launch. In 2013, the ESA plans to launch its first rover to Mars; the ExoMars rover will be capable of drilling 2 m into the soil in search of organic molecules. The Finnish-Russian MetNet mission will send tens of small landers to the Martian surface to establish a widespread surface observation network investigating the planet's atmospheric structure, physics and meteorology.
A precursor mission using 1–2 landers is scheduled for launch in 2009 or 2011; one possibility is a piggyback launch on the Russian Phobos-Grunt mission. Other launches will take place in the launch windows extending to 2019. Manned Mars exploration by the United States has been explicitly identified as a long-term goal in the Vision for Space Exploration announced in 2004 by US President George W. Bush. NASA and Lockheed Martin have begun work on the Orion spacecraft, formerly the Crew Exploration Vehicle, which is currently scheduled to send a human expedition to Earth's moon by 2020 as a stepping stone to an expedition to Mars thereafter. The European Space Agency hopes to land humans on Mars between 2030 and 2035. This will be preceded by successively larger probes, starting with the launch of the ExoMars probe and a Mars Sample Return Mission. On September 28, 2007, NASA administrator Michael D. Griffin stated that NASA aims to put a man on Mars by 2037: "in 2057, we should be celebrating 20 years of man on Mars." On September 15, 2008, NASA announced MAVEN, a robotic mission to provide information about Mars's atmosphere.

### Astronomy on Mars

With the existence of various orbiters, landers, and rovers, it is now possible to study astronomy from the Martian skies. The Earth and the Moon are easily visible, while Mars's moon Phobos appears about one third the angular diameter that the full Moon shows from Earth. Deimos, on the other hand, appears more or less star-like, only slightly brighter than Venus does from Earth. Various phenomena well known on Earth have now also been observed on Mars, such as meteors and auroras. A transit of the Earth as seen from Mars will occur on November 10, 2084. There are also transits of Mercury and transits of Venus, and the moon Deimos is of sufficiently small angular diameter that its partial "eclipses" of the Sun are best considered transits (see Transit of Deimos from Mars).
## Viewing

To the naked eye, Mars usually appears a distinct yellow, orange, or reddish color, and varies in brightness more than any other planet as seen from Earth over the course of its orbit. The apparent magnitude of Mars varies from +1.8 at conjunction to as high as −2.9 at perihelic opposition. When farthest from the Earth, it is more than seven times as distant as when it is closest. When least favourably positioned, it can be lost in the Sun's glare for months at a time. At its most favourable times — which occur twice every 32 years, alternately at 15- and 17-year intervals, and always between late July and late September — Mars shows a wealth of surface detail to a telescope. Especially noticeable, even at low magnification, are the polar ice caps. Mars is said to be at opposition when it lies opposite the Sun in Earth's sky, which occurs close to the time of its nearest approach to Earth. The length of time between successive oppositions, the synodic period, is 780 days. Because of the eccentricities of the orbits, the times of opposition and minimum distance can differ by up to 8.5 days. The minimum distance varies between about 55 and 100 million km owing to the planets' elliptical orbits. The next Mars opposition will occur on January 29, 2010. As Mars approaches opposition, it begins a period of retrograde motion, meaning it appears to move backwards in a looping motion with respect to the background stars.

### 2003 closest approach

On August 27, 2003, at 9:51:13 UT, Mars made its closest approach to Earth in nearly 60 000 years: 55 758 006 km. This occurred when Mars was one day from opposition and about three days from its perihelion, making Mars particularly easy to see from Earth. The last time it came so close is estimated to have been on September 12, 57 617 BC, and the next time will be in 2287. However, this record approach was only very slightly closer than other recent close approaches.
For instance, the minimum distance on August 22, 1924 was , and the minimum distance on August 24, 2208 will be . The orbital changes of Earth and Mars are making the approaches nearer: the 2003 record will be bettered 22 times by the year 4000.

### Historical observations

The history of observations of Mars is marked by the oppositions of Mars, when the planet is closest to Earth and hence most easily visible, which occur every couple of years. Even more notable are the perihelic oppositions of Mars, which occur about every 15–17 years and are distinguished because Mars is then close to perihelion, bringing it even closer to Earth. Aristotle was among the first known writers to describe observations of Mars, noting that, as it passed behind the Moon, it was farther away than was originally believed. The only observed occultation of Mars by Venus was that of October 3, 1590, seen by M. Möstlin at Heidelberg. In 1609, Mars was viewed by Galileo, who was the first to see it through a telescope. By the 19th century, the resolution of telescopes had reached a level sufficient for surface features to be identified. A perihelic opposition of Mars occurred on September 5, 1877. In that year, the Italian astronomer Giovanni Schiaparelli, then in Milan, used a 22 cm telescope to help produce the first detailed map of Mars. These maps notably contained features he called canali, which were later shown to be an optical illusion. The canali were supposedly long straight lines on the surface of Mars, to which he gave the names of famous rivers on Earth. His term, which means 'channels' or 'grooves', was popularly mistranslated in English as canals. Influenced by these observations, the orientalist Percival Lowell founded an observatory with 300 mm and 450 mm telescopes. The observatory was used for the exploration of Mars during the last good opportunity in 1894 and the following less favorable oppositions.
He published several books on Mars and life on the planet, which had a great influence on the public. The canali were also found by other astronomers, such as Henri Joseph Perrotin and Louis Thollon in Nice, using one of the largest telescopes of that time. The seasonal changes (consisting of the diminishing of the polar caps and the dark areas formed during the Martian summer), in combination with the canals, led to speculation about life on Mars, and it was a long-held belief that Mars contained vast seas and vegetation. Telescopes never reached the resolution required to confirm any of these speculations. As bigger telescopes were used, however, fewer long straight canali were observed. During an observation in 1909 by Flammarion with a 840 mm telescope, irregular patterns were observed, but no canali were seen. Even in the 1960s, articles were published on Martian biology, setting aside explanations other than life for the seasonal changes on Mars. Detailed scenarios for the metabolism and chemical cycles of a functional ecosystem were published. It was not until spacecraft visited the planet during NASA's Mariner missions in the 1960s that these myths were dispelled. The results of the Viking life-detection experiments ushered in a period in which the hypothesis of a hostile, dead planet was generally accepted. Some maps of Mars were made using the data from these missions, but it was not until the Mars Global Surveyor mission, launched in 1996 and operated until late 2006, that complete, extremely detailed maps were obtained. These maps are now available online. On July 31, 2008, NASA announced that it had discovered water on the planet.

## Mars in culture

### Historical connections

Mars is named after the Roman god of war. In Babylonian astronomy, the planet was named after Nergal, their deity of fire, war, and destruction, most likely due to the planet's reddish appearance.
When the Greeks equated Nergal with their god of war, Ares, they named the planet Ἄρεως ἀστήρ (Areos aster), or "star of Ares". Then, following the identification of Ares and Mars, it was translated into Latin as stella Martis, or "star of Mars", or simply Mars. The Greeks also called the planet Πυρόεις (Pyroeis), meaning "fiery". In Hindu mythology, Mars is known as Mangala (मंगल). The planet is also called Angaraka in Sanskrit, after the celibate god of war, who possesses the signs of Aries and Scorpio and teaches the occult sciences. The planet was known by the Egyptians as "Ḥr Dšr", or "Horus the Red". The Hebrews named it Ma'adim (מאדים) — "the one who blushes"; this is where one of the largest canyons on Mars, the Ma'adim Vallis, gets its name. It is known as al-Mirrikh in Arabic and Merih in Turkish. In Urdu and Persian it is written as مریخ and known as "Merikh". The etymology of al-Mirrikh is unknown. The ancient Persians named it Bahram, after the Zoroastrian god of faith, written بهرام. Ancient Turks called it Sakit. The Chinese, Japanese, Korean and Vietnamese cultures refer to the planet as 火星, the fire star, a name based on the ancient Chinese mythological cycle of Five Elements. Its symbol, derived from the astrological symbol of Mars, is a circle with a small arrow pointing out from behind. It is a stylized representation of the shield and spear used by the Roman god Mars, who in Roman mythology was the god of war and patron of warriors. This symbol is also used in biology to denote the male sex, and in alchemy to symbolise the element iron, which was considered to be dominated by Mars, whose characteristic red colour is coincidentally due to iron oxide. ♂ occupies Unicode position U+2642.

### Intelligent "Martians"

The popular idea that Mars was populated by intelligent Martians exploded in the late 19th century.
Schiaparelli's "canali" observations, combined with Percival Lowell's books on the subject, put forward the standard notion of a planet that was a drying, cooling, dying world with ancient civilizations constructing irrigation works. Many other observations and proclamations by notable personalities added to what has been termed "Mars Fever". In 1899, while investigating atmospheric radio noise using his receivers in his Colorado Springs lab, the inventor Nikola Tesla observed repetitive signals that he later surmised might have been radio communications coming from another planet, possibly Mars. In a 1901 interview Tesla said: "It was some time afterward when the thought flashed upon my mind that the disturbances I had observed might be due to an intelligent control. Although I could not decipher their meaning, it was impossible for me to think of them as having been entirely accidental. The feeling is constantly growing on me that I had been the first to hear the greeting of one planet to another." Tesla's theories gained support from Lord Kelvin who, while visiting the United States in 1902, was reported to have said that he thought Tesla had picked up Martian signals being sent to the United States. However, Kelvin "emphatically" denied this report shortly before departing America: "What I really said was that the inhabitants of Mars, if there are any, were doubtless able to see New York, particularly the glare of the electricity." In a New York Times article in 1901, Edward Charles Pickering, director of the Harvard College Observatory, said that they had received a telegram from Lowell Observatory in Arizona that seemed to confirm that Mars was trying to communicate with the Earth. Early in December 1900, we received from Lowell Observatory in Arizona a telegram that a shaft of light had been seen to project from Mars (the Lowell observatory makes a specialty of Mars) lasting seventy minutes.
I wired these facts to Europe and sent out neostyle copies through this country. The observer there is a careful, reliable man and there is no reason to doubt that the light existed. It was given as from a well-known geographical point on Mars. That was all. Now the story has gone the world over. In Europe it is stated that I have been in communication with Mars, and all sorts of exaggerations have sprung up. Whatever the light was, we have no means of knowing. Whether it had intelligence or not, no one can say. It is absolutely inexplicable. Pickering later proposed creating a set of mirrors in Texas with the intention of signaling Martians. In recent decades, the high-resolution mapping of the surface of Mars, culminating in Mars Global Surveyor, revealed no artifacts of habitation by 'intelligent' life, but pseudoscientific speculation about intelligent life on Mars continues from commentators such as Richard C. Hoagland. Reminiscent of the canali controversy, some speculations are based on small-scale features perceived in the spacecraft images, such as 'pyramids' and the 'Face on Mars'. Planetary astronomer Carl Sagan wrote in Cosmos: "Mars has become a kind of mythic arena onto which we have projected our Earthly hopes and fears."

### In fiction

The depiction of Mars in fiction has been stimulated by its dramatic red color and by early scientific speculations that its surface conditions might support not only life, but intelligent life. Thus originated a large number of science fiction scenarios, the best known of which is H. G. Wells' The War of the Worlds, in which Martians seek to escape their dying planet by invading Earth. A subsequent radio version of The War of the Worlds on October 30, 1938 was presented as a live news broadcast, and many listeners mistook it for the truth.
Also influential were Ray Bradbury's The Martian Chronicles, in which human explorers accidentally destroy a Martian civilization, Edgar Rice Burroughs' Barsoom series, and a number of Robert A. Heinlein stories before the mid-sixties. A comic figure of an intelligent Martian, Marvin the Martian, appeared in 1948 as a character in the Looney Tunes animated cartoons of Warner Brothers, and has continued as part of popular culture to the present. Jonathan Swift made reference to the moons of Mars about 150 years before their actual discovery by Asaph Hall, giving reasonably accurate descriptions of their orbits in the 19th chapter of his novel Gulliver's Travels. After the Mariner and Viking spacecraft had returned pictures of Mars as it really is, an apparently lifeless and canal-less world, these ideas about Mars had to be abandoned, and a vogue for accurate, realist depictions of human colonies on Mars developed, the best known of which may be Kim Stanley Robinson's Mars trilogy. However, pseudo-scientific speculations about the Face on Mars and other enigmatic landmarks spotted by space probes have meant that ancient civilizations continue to be a popular theme in science fiction, especially in film. Another popular theme, particularly among American writers, is the Martian colony that fights for independence from Earth. This is a major plot element in the novels of Greg Bear and Kim Stanley Robinson, as well as the movie Total Recall (based on a short story by Philip K. Dick) and the television series Babylon 5. Many video games also use this element, including Red Faction and the Zone of the Enders series. Mars and its moons were also the setting for the popular Doom video game franchise and the later Martian Gothic.

### In music

In Gustav Holst's The Planets, Mars is depicted as the "Bringer of War".

## Notes

1. Best fit ellipsoid
2. There are many serpentinization reactions.
Olivine is a solid solution between forsterite and fayalite whose general formula is $\left(Fe,Mg\right)_2SiO_4$. The reaction producing methane from olivine can be written (in balanced form) as:

Forsterite + Fayalite + Water + Carbonic acid → Serpentine + Magnetite + Methane

or:

$18\, Mg_2SiO_4 + 6\, Fe_2SiO_4 + 26\, H_2O + CO_2 \rightarrow 12\, Mg_3Si_2O_5\left(OH\right)_4 + 4\, Fe_3O_4 + CH_4$
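The balance of the serpentinization equation above can be machine-checked by counting atoms of each element on both sides. A small Python sketch (not part of the original article):

```python
# Sketch: verify the serpentinization reaction above is balanced by
# counting atoms of each element per side of the equation.
from collections import Counter

def atoms(formula, coeff):
    # Multiply per-formula-unit element counts by a stoichiometric coefficient.
    return Counter({el: n * coeff for el, n in formula.items()})

forsterite = {"Mg": 2, "Si": 1, "O": 4}           # Mg2SiO4
fayalite   = {"Fe": 2, "Si": 1, "O": 4}           # Fe2SiO4
water      = {"H": 2, "O": 1}                     # H2O
co2        = {"C": 1, "O": 2}                     # CO2
serpentine = {"Mg": 3, "Si": 2, "O": 9, "H": 4}   # Mg3Si2O5(OH)4
magnetite  = {"Fe": 3, "O": 4}                    # Fe3O4
methane    = {"C": 1, "H": 4}                     # CH4

lhs, rhs = Counter(), Counter()
for formula, coeff in [(forsterite, 18), (fayalite, 6), (water, 26), (co2, 1)]:
    lhs += atoms(formula, coeff)
for formula, coeff in [(serpentine, 12), (magnetite, 4), (methane, 1)]:
    rhs += atoms(formula, coeff)

print(lhs == rhs)  # True: every element balances
```

Both sides come out to 36 Mg, 24 Si, 12 Fe, 124 O, 52 H and 1 C, confirming the coefficients.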
https://matplotlib.org/stable/api/_as_gen/mpl_toolkits.mplot3d.art3d.Patch3DCollection.html
# mpl_toolkits.mplot3d.art3d.Patch3DCollection

class mpl_toolkits.mplot3d.art3d.Patch3DCollection(*args, zs=0, zdir='z', depthshade=True, **kwargs)

A collection of 3D patches.

Create a collection of flat 3D patches with its normal vector pointed in zdir direction, and located at zs on the zdir axis. zs can be a scalar or an array-like of the same length as the number of patches in the collection.

Constructor arguments are the same as for PatchCollection. In addition, the keywords zs=0 and zdir='z' are available, and the keyword argument depthshade indicates whether or not to shade the patches in order to give the appearance of depth (default is True). This is typically desired in scatter plots.

__init__(self, *args, zs=0, zdir='z', depthshade=True, **kwargs)

Same signature and behavior as the class constructor described above.

__module__ = 'mpl_toolkits.mplot3d.art3d'

do_3d_projection(self, renderer=<deprecated parameter>)

get_depthshade(self)

get_edgecolor(self)

get_facecolor(self)

set_3d_properties(self, zs, zdir)

set_depthshade(self, depthshade)

Set whether depth shading is performed on collection members.

Parameters: depthshade : bool. Whether to shade the patches in order to give the appearance of depth.

set_sort_zpos(self, val)

Set the position to use for z-sorting.
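In practice a Patch3DCollection is usually obtained by converting a flat PatchCollection with art3d.patch_collection_2d_to_3d rather than by calling the constructor directly. A minimal sketch (the circle positions and sizes are arbitrary illustration values; a headless backend is assumed):

```python
# Sketch: lift a 2D PatchCollection into a Patch3DCollection.
import matplotlib
matplotlib.use("Agg")  # headless backend, assumed for this example
import matplotlib.pyplot as plt
from matplotlib.patches import Circle
from matplotlib.collections import PatchCollection
from mpl_toolkits.mplot3d import art3d

fig = plt.figure()
ax = fig.add_subplot(projection="3d")

# Three circles positioned via offsets, scatter-style.
circles = [Circle((0, 0), 0.3) for _ in range(3)]
col = PatchCollection(circles, offsets=[(0, 0), (1, 1), (2, 0)])
ax.add_collection(col)

# The conversion swaps the class to Patch3DCollection, places the patches
# at heights zs along the zdir axis, and enables depth shading.
art3d.patch_collection_2d_to_3d(col, zs=[0, 1, 2], zdir="z", depthshade=True)

print(type(col).__name__)  # Patch3DCollection
```

Here zs is given as an array-like with one entry per patch, matching the "scalar or array-like" rule stated above.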
https://yetanothermathblog.com/2012/02/16/applications-of-graphs-to-boolean-functions/
# Applications of graphs to Boolean functions

Let f be a Boolean function on $GF(2)^n$. The Cayley graph of f is defined to be the graph $\Gamma_f = (GF(2)^n, E_f )$, whose vertex set is $GF(2)^n$ and whose edge set is $E_f =\{ (u,v) \in GF(2)^n \times GF(2)^n \ |\ f(u+v)=1\}$. The adjacency matrix $A_f$ is the matrix whose entries are $A_{i,j} = f(b(i) + b(j))$, where b(k) is the binary representation of the integer k. Note that $\Gamma_f$ is a regular graph of degree wt(f), where wt denotes the Hamming weight of f when regarded as a vector of values (of length $2^n$). Recall that, given a graph $\Gamma$ and its adjacency matrix A, the spectrum Spec($\Gamma$) is the multi-set of eigenvalues of A. The Walsh transform of a Boolean function f is the integer-valued function over $GF(2)^n$ defined by $W_f(u) = \sum_{x \in GF(2)^n} (-1)^{f(x)+ \langle u,x\rangle}.$ A Boolean function f is bent if $|W_f(a)| = 2^{n/2}$ for all a (this only makes sense if n is even). The Hadamard transform of an integer-valued function f is the integer-valued function over $GF(2)^n$ defined by $H_f(u) = \sum_{x \in GF(2)^n} f(x)(-1)^{\langle u,x\rangle}.$ It turns out that the spectrum of $\Gamma_f$ is equal to the Hadamard transform of f when f is regarded as a vector of (integer) 0,1-values. (This nice fact seems to have first appeared in [2], [3].) A graph is regular of degree r (or r-regular) if every vertex has degree r (the number of edges incident to it). An r-regular graph $\Gamma$ is a strongly regular graph with parameters (v, r, d, e) (for nonnegative integers d, e) provided that, for all vertices u, v, the number of vertices adjacent to both u and v equals e if u, v are adjacent, and equals d if u, v are nonadjacent. It turns out that f is bent if and only if $\Gamma_f$ is strongly regular and e = d (see [3] and [4]). The following Sage computations illustrate these and other theorems in [1], [2], [3], [4].
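Before the Sage sessions, the two key claims can be sanity-checked in plain Python (no Sage assumed) for the bent function of the first example, $f(x_0,x_1,x_2,x_3) = x_0x_1 + x_2x_3$ on $GF(2)^4$:

```python
# Sketch: check (1) |W_f(u)| = 2^(n/2) = 4 for all u, so f is bent, and
# (2) the Hadamard transform of f's 0/1 value vector gives the Cayley
# graph spectrum {6, 2 (x6), -2 (x9)} reported by Sage.
from itertools import product

n = 4
V = list(product([0, 1], repeat=n))  # the vectors of GF(2)^4

def f(x):
    return (x[0] * x[1] + x[2] * x[3]) % 2

def dot(u, x):
    return sum(a * b for a, b in zip(u, x)) % 2

def walsh(u):
    # W_f(u) = sum over x of (-1)^(f(x) + <u, x>)
    return sum((-1) ** ((f(x) + dot(u, x)) % 2) for x in V)

def hadamard(u):
    # H_f(u) = sum over x of f(x) * (-1)^<u, x>
    return sum(f(x) * (-1) ** dot(u, x) for x in V)

assert all(abs(walsh(u)) == 4 for u in V)   # f is bent
spectrum = sorted(hadamard(u) for u in V)   # multiset of eigenvalues
print(spectrum)
```

The sorted Hadamard values come out as nine −2's, six 2's and one 6, matching the Gamma.spectrum() output of the Sage session.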
Consider the Boolean function $f: GF(2)^4 \to GF(2)$ given by $f(x_0,x_1,x_2) = x_0x_1+x_2x_3$. sage: V = GF(2)^4 sage: f = lambda x: x[0]*x[1]+x[2]*x[3] sage: CartesianProduct(range(16), range(16)) Cartesian product of [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15] sage: C = CartesianProduct(range(16), range(16)) sage: Vlist = V.list() sage: E = [(x[0],x[1]) for x in C if f(Vlist[x[0]]+Vlist[x[1]])==1] sage: len(E) 96 sage: E = Set([Set(s) for s in E]) sage: E = [tuple(s) for s in E] sage: Gamma = Graph(E) sage: Gamma Graph on 16 vertices sage: VG = Gamma.vertices() sage: L1 = [] sage: L2 = [] sage: for v1 in VG: ....: for v2 in VG: ....: N1 = Gamma.neighbors(v1) ....: N2 = Gamma.neighbors(v2) ....: if v1 in N2: ....: L1 = L1+[len([x for x in N1 if x in N2])] ....: if not(v1 in N2) and v1!=v2: ....: L2 = L2+[len([x for x in N1 if x in N2])] ....: ....: sage: L1; L2 [2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2] [2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2] This implies the graph is strongly regular with d=e=2. 
sage: Gamma.spectrum() [6, 2, 2, 2, 2, 2, 2, -2, -2, -2, -2, -2, -2, -2, -2, -2] sage: [walsh_transform(f, a) for a in V] [4, 4, 4, -4, 4, 4, 4, -4, 4, 4, 4, -4, -4, -4, -4, 4] sage: Omega_f = [v for v in V if f(v)==1] sage: len(Omega_f) 6 sage: Gamma.is_bipartite() False sage: Gamma.is_hamiltonian() True sage: Gamma.is_planar() False sage: Gamma.is_regular() True sage: Gamma.is_eulerian() True sage: Gamma.is_connected() True sage: Gamma.is_triangle_free() False sage: Gamma.diameter() 2 sage: Gamma.degree_sequence() [6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6] sage: show(Gamma) # bent-fcns-cayley-graphs1.png Here is the picture of the graph: sage: H = matrix(QQ, 16, 16, [(-1)^(Vlist[x[0]]).dot_product(Vlist[x[1]]) for x in C]) sage: H [ 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1] [ 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1 1 -1] [ 1 1 -1 -1 1 1 -1 -1 1 1 -1 -1 1 1 -1 -1] [ 1 -1 -1 1 1 -1 -1 1 1 -1 -1 1 1 -1 -1 1] [ 1 1 1 1 -1 -1 -1 -1 1 1 1 1 -1 -1 -1 -1] [ 1 -1 1 -1 -1 1 -1 1 1 -1 1 -1 -1 1 -1 1] [ 1 1 -1 -1 -1 -1 1 1 1 1 -1 -1 -1 -1 1 1] [ 1 -1 -1 1 -1 1 1 -1 1 -1 -1 1 -1 1 1 -1] [ 1 1 1 1 1 1 1 1 -1 -1 -1 -1 -1 -1 -1 -1] [ 1 -1 1 -1 1 -1 1 -1 -1 1 -1 1 -1 1 -1 1] [ 1 1 -1 -1 1 1 -1 -1 -1 -1 1 1 -1 -1 1 1] [ 1 -1 -1 1 1 -1 -1 1 -1 1 1 -1 -1 1 1 -1] [ 1 1 1 1 -1 -1 -1 -1 -1 -1 -1 -1 1 1 1 1] [ 1 -1 1 -1 -1 1 -1 1 -1 1 -1 1 1 -1 1 -1] [ 1 1 -1 -1 -1 -1 1 1 -1 -1 1 1 1 1 -1 -1] [ 1 -1 -1 1 -1 1 1 -1 -1 1 1 -1 1 -1 -1 1] sage: flist = vector(QQ, [int(f(v)) for v in V]) sage: H*flist (6, -2, -2, 2, -2, -2, -2, 2, -2, -2, -2, 2, 2, 2, 2, -2) sage: A = matrix(QQ, 16, 16, [f(Vlist[x[0]]+Vlist[x[1]]) for x in C]) sage: A.eigenvalues() [6, 2, 2, 2, 2, 2, 2, -2, -2, -2, -2, -2, -2, -2, -2, -2] Here is another example: $f: GF(2)^3 \to GF(2)$ given by $f(x_0,x_1,x_2) = x_0x_1+x_2$. 
sage: V = GF(2)^3
sage: f = lambda x: x[0]*x[1]+x[2]
sage: Omega_f = [v for v in V if f(v)==1]
sage: len(Omega_f)
4
sage: C = CartesianProduct(range(8), range(8))
sage: Vlist = V.list()
sage: E = [(x[0],x[1]) for x in C if f(Vlist[x[0]]+Vlist[x[1]])==1]
sage: E = Set([Set(s) for s in E])
sage: E = [tuple(s) for s in E]
sage: Gamma = Graph(E)
sage: Gamma
Graph on 8 vertices
sage: VG = Gamma.vertices()
sage: L1 = []
sage: L2 = []
sage: for v1 in VG:
....:     for v2 in VG:
....:         N1 = Gamma.neighbors(v1)
....:         N2 = Gamma.neighbors(v2)
....:         if v1 in N2:
....:             L1 = L1+[len([x for x in N1 if x in N2])]
....:         if not(v1 in N2) and v1!=v2:
....:             L2 = L2+[len([x for x in N1 if x in N2])]
....:
sage: L1; L2
[2, 0, 2, 2, 2, 2, 0, 2, 2, 2, 0, 2, 2, 2, 2, 0, 0, 2, 2, 2, 2, 0, 2, 2, 2, 0, 2, 2, 2, 2, 0, 2]
[2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]

This implies that the graph is not strongly regular, therefore f is not bent.

sage: Gamma.spectrum()
[4, 2, 0, 0, 0, -2, -2, -2]
sage: Gamma.is_bipartite()
False
sage: Gamma.is_hamiltonian()
True
sage: Gamma.is_planar()
False
sage: Gamma.is_regular()
True
sage: Gamma.is_eulerian()
True
sage: Gamma.is_connected()
True
sage: Gamma.is_triangle_free()
False
sage: Gamma.diameter()
2
sage: Gamma.degree_sequence()
[4, 4, 4, 4, 4, 4, 4, 4]
sage: H = matrix(QQ, 8, 8, [(-1)^(Vlist[x[0]]).dot_product(Vlist[x[1]]) for x in C])
sage: H
[ 1  1  1  1  1  1  1  1]
[ 1 -1  1 -1  1 -1  1 -1]
[ 1  1 -1 -1  1  1 -1 -1]
[ 1 -1 -1  1  1 -1 -1  1]
[ 1  1  1  1 -1 -1 -1 -1]
[ 1 -1  1 -1 -1  1 -1  1]
[ 1  1 -1 -1 -1 -1  1  1]
[ 1 -1 -1  1 -1  1  1 -1]
sage: flist = vector(QQ, [int(f(v)) for v in V])
sage: H*flist
(4, 0, 0, 0, -2, -2, -2, 2)
sage: Gamma.spectrum()
[4, 2, 0, 0, 0, -2, -2, -2]
sage: A = matrix(QQ, 8, 8, [f(Vlist[x[0]]+Vlist[x[1]]) for x in C])
sage: A.eigenvalues()
[4, 2, 0, 0, 0, -2, -2, -2]
sage: show(Gamma) # bent-fcns-cayley-graphs2.png

Here is the picture: bent-fcns-cayley-graphs2.png
# 2: 01 In-Class Assignment - Welcome to Matrix Algebra with computational applications

## Overview

This course covers basic concepts of linear algebra, with an emphasis on computational techniques. Linear algebra plays a fundamental role in a wide range of applications from physical and social sciences, statistics, engineering, finance, computer graphics, big data and machine learning. We will study vectors, vector spaces, linear transformations, matrix-vector manipulations, solving linear systems, least squares problems, and eigenvalue problems. Matrix decompositions (e.g. LU, QR, SVD, etc.) play a fundamental role in the course.

Alexander Paulin
[email protected]

Linear Algebra and Differential Equations 54 (002 LEC), Spring 2018

Lectures: MWF, 11am-12pm, Wheeler Hall Auditorium.

Office hours: MWF 2pm-4pm, TT 3pm-5pm, 796 Evans Hall.

Discussion sections: Three, one hour sessions each week on MWF. Here is a link with further details. You may only attend the discussion section for which you are enrolled. Here is a link to GSI office hours. You may attend any office hour you please.

(1/12) Welcome to Math 54! This fantastic course is an introduction to linear algebra and its applications to differential equations. At first glance linear algebra is just about solving systems of linear equations. However, after digging a little deeper, we'll discover a rich new language which will be applicable across all mathematical disciplines. This is something of a watershed course, opening up a whole branch of mathematics. It's going to be great!

(1/12) Everything related to the course will be on this website. We will not be using bCourses. There will be weekly homework (posted below) which will be due every Friday in discussion section. In addition to this there will be weekly quizzes.
I have office hours every day of the week so there should always be an opportunity to get my help if you need it. If you can't make any office hours, e-mail me and we'll find another time to meet. In addition to this I will be posting my own lecture notes on this website at the end of each week. You'll be able to link to them directly from the detailed syllabus below. There will also be video recordings of the lectures posted at the end of Monday, Wednesday and Friday. Again you'll be able to link to them directly from the syllabus below.

(1/12) Discussion sections will begin on Wednesday the 17th of January.

(1/12) Make sure to read the course policy and the detailed syllabus below.

(1/12) The first homework assignment will be due on Friday of week 2. The first quiz will also be on Friday of week 2.

Textbook: Linear Algebra and Differential Equations (UC Berkeley Custom Edition), 3rd Edition, ISBN: 9781323720868. This is a combination of two separate textbooks:

- Linear Algebra and its Applications, Lay-Lay-Macdonald, 5th Edition.
- Fundamentals of Differential Equations, Nagle-Saff-Snider, 9th Edition.

I strongly advise you against buying these textbooks individually. They contain far more material than will be needed and will be substantially more expensive. We've negotiated a reduced price for the custom textbook if you buy it directly from the publisher. Here's the link: You can also buy this textbook from the Cal Student Store for \$95 new.

Homework assignments are due each Thursday in section. They will be posted here along with solutions and videos of me going through a selection of the more challenging problems. The homework corresponding to material covered during a given week is due in the following week's Thursday discussion session. Assignments will be graded on a coarse scale based on spot checks for correctness and completeness. Your two lowest homework scores will be dropped.
You may discuss the homework problems with your classmates, but you must write your solutions on your own. Doing the work yourself is crucial to learning the material properly. Make use of discussion sections, office hours, study groups, etc. if you need assistance, but in the end, you should still write up your own solutions. I am aware that it is not hard to find solutions manuals on the internet. Copying said solutions on a homework assignment is illegal and will result in a negative grade for that assignment, and potentially in more serious consequences. (Also, it will not help you learn the material.)

The homework load for this course is heavy at times, but it is essential for learning the material. Be organized, and don't leave things for the last moment. (You cannot complete the homework assignment if you start on the night before it is due.) Work in small installments, and ask questions in section and during office hours.

Quizzes will take place roughly every week in the Friday discussion section. They will last about 15 minutes and be closely related to the homework problems for that week. Your two lowest scores will be dropped from your grade. Here is the quiz schedule: For more detailed information see the course policy.

There will be two midterms and a final. There will be no make-up exams, unless there are truly exceptional circumstances. Because of the grading scheme, you can miss one midterm, for whatever reason, without penalty. On the other hand, missing both midterms or missing the final will seriously harm your grade and make it very difficult/impossible to pass the course. Please check the dates now to make sure that you have no unavoidable conflicts!

• First midterm: February 12 in class
• Second midterm: March 23 in class
• Final exam: May 8 (7pm-10pm)

Calculators and notes will NOT be allowed for the exams.
To obtain full credit for an exam question, you must obtain the correct answer and give a correct and readable derivation or justification of the answer. Unjustified correct answers will be regarded very suspiciously and will receive little or no credit. The graders are looking for demonstration that you understand the material. To maximize credit, cross out incorrect work. We will be scanning all exams so you will get them back electronically.

After each midterm, there will be a brief window when you can request a regrade. In general, midterm exam grades cannot be changed. The only exception to this is when there has been a clerical error such as a mistake in adding the scores (if this is the case immediately inform your GSI) or if part of the solution has been accidentally overlooked by the grader. Regrade requests may result in a lowering of your grade. As per university policy, final exams cannot be regraded.

DSP students requiring accommodations for exams must submit to the instructor a "letter of accommodation" from the Disabled Students Program at least two weeks in advance. Due to delays in processing, you are encouraged to contact the DSP office before the start of the semester.

Cheating is unacceptable. Any student caught cheating will be reported to higher authorities for disciplinary action.

There will be two midterms, the first on Monday February 12 and the second on Friday March 23. The final exam will be on Monday May 7 (7pm - 10pm). For more detailed information see the course policy.

Each midterm and final score will first be curved into a number on a consistent scale. More precisely, I will assign a number to each exam (midterm 1, midterm 2 and the final) reflecting their relative position in the class. As an example, if you scored 70/120 on the first midterm and exactly 60 percent of the class got this score or below, you'd be assigned the scaled score of 60/100 for that midterm. These numbers are just a reflection of your relative performance.
They do not correspond to letter grades in the usual sense. Section scores will be adjusted to account for differences between GSIs in quiz difficulty and grading standards. Your lowest scaled midterm score will be replaced by the scaled final exam score if it is higher. Finally, the scaled scores will be added up (with proportions outlined above) giving a final course score between 0 and 100. This score gives an extremely accurate description of your overall relative performance.

This is not high school. For example, you do not need to get 90 or above to get an A. Your final letter grade will ultimately be decided by your ability to demonstrate a crisp understanding of the material and the ability to apply it to a diverse set of problems. Broadly speaking I will be looking for the following criteria for each letter grade:

• A-/A/A+: A clear demonstration that the central concepts have been fully understood. Computational techniques (and their many subtleties) have been mastered and can be applied accurately to a diverse problem set. A strong understanding of how the abstract concepts can be applied to many real world applications.

• B-/B/B+: Demonstration that the central concepts have been reasonably understood, but perhaps with minor misunderstandings. Core computational techniques have been reasonably understood (but generally not key subtleties) and can be applied fairly accurately to a fairly large problem set. Reasonable understanding of how the abstract concepts can be applied to some real world applications.

• C-/C/C+: Demonstration that the central concepts have been vaguely understood, but with major misunderstandings. Core computational techniques have been poorly understood and can be applied accurately only in the most standard examples. Weak understanding of how the abstract concepts can be applied to even basic real world applications.

To be as fair as possible, I will also take into account the historic average of the class.
This means that if I set an exam which is too difficult it will be taken into account in the final letter grades.

Please note: incomplete grades, according to university policy, can be given only if unanticipated events beyond your control (e.g. a medical emergency) make it impossible for you to complete the course, and if you are otherwise passing (with a C- or above).

Enrollment: For questions about enrollment contact Jennifer Pinney.

The Student Learning Center provides support for this class, including full adjunct courses, review sessions for exams, and drop-in tutoring. This is a truly fantastic resource. I definitely recommend you take advantage of it.

Lecture Notes Chapter 6.2

## Applied Linear Algebra

Office: MSB 202
Phone: (860) 486 9153
Office Hours: T, Th 11:00-12:00 and by appointment.

Open Door Policy: You are welcome to drop by to discuss any aspect of the course, anytime, on the days I am on campus, Tuesdays and Thursdays.

Section 001: Tuesday, Thursday 12:30-1:45. Classroom MSB 403
Section 005: Tuesday, Thursday 2:00-3:15. Classroom MSB 311

Textbook: Linear Algebra and its Applications, by David C. Lay, 4th edition.

This course provides an introduction to the concepts and techniques of Linear Algebra. This includes the study of matrices and their relation to linear equations, linear transformations, vector spaces, eigenvalues and eigenvectors, and orthogonality.

Homework will be assigned after every section, collected on Thursdays, and returned the following class. Solutions to selected homework exercises will be discussed and handed out at that time. For that reason, late homework will not usually be accepted. Homework assignments consist of individual practice exercises from the textbook (see Syllabus below) and occasional group projects. You are encouraged to work with other students in this class on all your homework assignments. Group projects, one report per group, will be graded for exam points.
Textbook homework assignments, handed in individually, will not be graded, but will carry exam points (this will be explained in more detail in class). You will need to show your work on exams and homework assignments, but may use calculators, in all cases, to double check your answers and save time on routine calculations. The recommended graphing calculator is the TI-83 (best value for the money) but others will do as well.

Exam Schedule and Guidelines

There will be two evening exams during the semester and a final exam. None is strictly cumulative, but there will be overlap of material between the exams. NO MAKE-UP EXAMS unless there is a very serious emergency for which you provide proof. Quizzes will be given only if necessary.

Note: PBB = Pharmacy/Biology Building (it is very close to MSB). PB = Physics Building (it is on the first floor of MSB).

Exam Schedule, Sections 01 and 05 (an active link to each exam's guidelines will appear in the week before each exam):

Exam 1: Thursday, February 20, 6:30-8:30, PBB 129
Exam 1 Guidelines: Material and Review Suggestions
Attention students: Solutions to homework exercises for sections 1.4, 1.5, 1.7

Exam 2: Thursday, April 3, 6:30-8:30, PBB 129
Exam 2 Guidelines: Material and Review Suggestions
Attention students: Links to solutions to homework exercises can be found in the syllabus below

Final Exam: Friday, May 9, 1:00-3:00, PB 38
Final Exam Guidelines: Material and Review Suggestions
Attention students: Additional office hours: Friday, May 9, 11:00-12:30

For help with the location of the exam building click on The Campus Map. UConn Final Exam Policy.

Grading: Homework, quizzes, and group projects about 10%. Each exam (including the final exam) is of equal weight, that is, about 30%.
Extra Help: The Q Center and Textbook Website

A lot of students get confused while understanding the concept of time complexity, but in this article we will explain it with a very simple example. Imagine a classroom of 100 students in which you gave your pen to one person. Now you want that pen back. Here are some ways to find the pen, and what the O order of each is:

O(n^2): You go and ask the first person in the class if he has the pen. You also ask this person whether each of the other 99 people in the classroom has that pen, and so on. This is what we call O(n^2).

O(n): Going and asking each student individually is O(n).

O(log n): Now I divide the class into two groups, then ask: "Is it on the left side, or the right side of the classroom?" Then I take that group, divide it into two and ask again, and so on. Repeat the process till you are left with one student who has your pen. This is what we mean by O(log n).

I might need to do the O(n^2) search if only one student knows on which student the pen is hidden. I'd use the O(n) search if one student had the pen and only they knew it. I'd use the O(log n) search if all the students knew, but would only tell me if I guessed the right side.

NOTE: We are interested in the rate of growth of time with respect to the inputs taken during the program execution.

Another example: the time complexity of an algorithm/code is not equal to the actual time required to execute a particular piece of code, but to the number of times a statement executes. We can demonstrate this by using the time command. For example, write code in C/C++ or any other language to find the maximum of N numbers, where N varies over 10, 100, 1000, 10000, and compile that code on a Linux based operating system (Fedora or Ubuntu). You will get surprising results, i.e. for N = 10 you may get 0.5 ms and for N = 10,000 you may get 0.2 ms.
Also, you will get different timings on different machines. So we can say that the actual time required to execute code is machine dependent (whether you are using a Pentium 1 or a Pentium 5), and it also depends on the network load if your machine is in a LAN/WAN. You will not even get the same timings on the same machine for the same code; the reason behind that is the current network load.

Now the question arises: if time complexity is not the actual time required to execute the code, then what is it? The answer is: instead of measuring the actual time required to execute each statement in the code, we consider how many times each statement executes.

## MATH 340L: Matrices and Matrix Calculations

It is unlikely that I will post any important material to Blackboard or Canvas; for any additional information I want to give you outside of class you should come to this webpage.

NEW EDITS: You asked me to write down the homework assignments that I announced in class. I will keep them on a separate page here -->

### Catalogue Description:

1. Linear Equations in Linear Algebra, 2. Matrix Algebra, 3. Determinants, 4. Vector Spaces, 5. Eigenvalues and Eigenvectors, 6. Orthogonality and Least Squares, and 7. Symmetric Matrices and Quadratic Forms.

There are two Linear Algebra courses at UT, Math 340L and Math 341, which are fairly similar. You cannot earn UT credit for both of them. Ordinarily, math majors must take Math 341, and no one else may. Math 340L focuses on computation and application; Math 341 on theory and proof. Please see an advisor in MPAA (on the ground floor of RLM) if you need assistance enrolling in the appropriate Linear Algebra course.

### Pre-requisites

One semester of calculus with a grade of at least C-.

Homework: I will assign homework problems, typically taken from the book, approximately weekly.
I will use a grader to try to get as much of your responses graded as possible, but I strongly encourage you to self-grade, that is, consult with me or your classmates to know that your answers are good. Remember, you do homework primarily to learn the material, not to score points. I will give a grade for each homework set, then drop the lowest two, then scale your remaining total to a 100-point scale as part of your semester grade.

due Monday, Aug 31
due Wednesday, Sep 09
due Wednesday, Sep 16
due Friday, Sep 25
due Friday, Oct 02
due Friday, Oct 09
due Friday, Oct 16
due Wednesday, Nov 04
due Wednesday, Nov 11
due Friday, Nov 20
due Friday, Dec 04

Quizzes: I reserve the right to give a few pop quizzes during the semester. Each of these will be treated as another homework assignment (and in particular, for some of you these may be among the two dropped homework assignments).

Exams: There will be 3 mid-term exams, to be held during the usual class period, and a comprehensive final exam. Each midterm is worth 100 points and the final is worth 200 points. Textbooks, notes, and electronic devices (including phones and calculators) are not permitted during exams. The exams will be a mix of multiple-choice and free-response questions; the ratio will change as the semester progresses.

No letter grades will be assigned to the midterms or homeworks, but you should keep track of where you stand: I will advise you of the class averages and you can use this information as a rough guideline.

### Policies

Classroom activity: Our meeting times together are very short so we must make the most of them. Come to class daily and ask questions; this is greatly facilitated by reading ahead each day and doing the homework problems as they are assigned. Please silence your cell phones. I will always assume that any conversations I hear are about the course material, so I may ask you to speak up.
Make-ups: It is in general not possible to make up missing quizzes or homework assignments after the due date. If you believe you will have to miss a graded event, please notify me in advance; I will try to arrange for you to complete the work early.

Students with disabilities: The University of Texas at Austin provides upon request appropriate academic accommodations for qualified students with disabilities. For more information, contact the Office of the Dean of Students at 471-6259, 471-4641 TTY.

Religious holidays: If you are unable to participate in a required class activity (such as an exam) because it conflicts with your religious traditions, please notify me IN ADVANCE and I will make accommodations for you. Typically I will ask you to complete the required work before the religious observance begins.

Academic Integrity: Please read the message about Academic Integrity from the Dean of Students Office. I very much prefer to treat you as professionals whose honesty is beyond question, but if my trust is violated I will follow the procedures available to me to see that dishonesty is exposed and punished.

Campus safety: Please familiarize yourself with the Emergency Preparedness instructions provided by the university's Campus Safety and Security office. In the event of severe weather or a security threat, we will immediately suspend class and follow the instructions given. You may wish to sign up with the campus alert programs.

Counseling: Students often encounter non-academic difficulties during the semester, including stresses from family, health issues, and lifestyle choices. I am not trained to help you with these but do encourage you to take advantage of the Counseling and Mental Health Center, Student Services Bldg (SSB), 5th Floor, open M-F 8am-5pm.
(512 471 3515, or www.cmhc.utexas.edu)

Add dates: If you enroll within the first four class days of the semester, and have missed any graded material, I will adjust the weighting of your graded sections accordingly so that you are not penalized. No such accommodation is made for students who enroll on the 5th day or later. (Such students must enroll through the MPAA advising center in RLM, and ordinarily I do not admit students who ask to enroll then if they have missed any graded activities.)

Drop dates: Aug 31 is the last day to drop without approval of the department chair; Sept 11 is the last day to drop the course for a possible refund; Nov 3 is the last day an undergraduate student may, with the dean's approval, withdraw from the University or drop a class except for urgent and substantiated, nonacademic reasons. For more information about deadlines for adding and dropping the course under different circumstances, please consult the Registrar's web page, http://registrar.utexas.edu/calendars/14-15/

Computers: We don't make use of sophisticated software in this class, but if you find this interesting, you are welcome to use the department's computer facilities. Our 40-seat undergrad computer lab in RLM 7.122 is open to all students enrolled in Math courses. Students can sign up for an individual account themselves in the computer lab using their UT EID. We have most of the mainstream commercial math software: Mathematica, Maple, Matlab, etc., and an assortment of open source programs. If you come to my office you will see me use some of this software to help illustrate concepts. Please see me if you would like more information. You may find the online row reducer useful for completing computational homeworks. If you have a graphing calculator, you can do it in there as well.

### Schedule

The following table is a tentative schedule for the course. Please be aware that material may be reordered, added or deleted.
Pay attention in class --- I'll let you know if we're doing something different.

## Topics in Mathematics with Applications in Finance

This is one of over 2,400 courses on OCW. Explore materials for this course in the pages linked along the left. MIT OpenCourseWare is a free & open publication of material from thousands of MIT courses, covering the entire MIT curriculum. No enrollment or registration. Freely browse and use OCW materials at your own pace. There's no signup, and no start or end dates. Knowledge is your reward. Use OCW to guide your own life-long learning, or to teach others. We don't offer credit or certification for using OCW. Made for sharing. Download files for later. Send to friends and colleagues. Modify, remix, and reuse (just remember to cite OCW as the source).

• Welcome to CS131!
• Schedule information may change during the quarter; please visit the Syllabus page regularly to stay up to date.

Lecture location has changed to 370-370 due to the high volume of student enrollment.

Office: Room 246, Gates Building. Office hours: Tuesday, 3-4pm
Office: Room 243, Gates Building. Office hours: By appointment

Class forum on Piazza (please ask all questions here if possible): piazza.com/stanford/fall2016/cs131

Additional reference material (not required): Computer Vision: A Modern Approach by Forsyth & Ponce

Office hours: Thursday, 10am-12pm, Huang Basement
Office hours: Tuesday and Thursday, 12-1pm, Huang Basement
Office hours: Tuesday, 9:30-11:30am, Huang Basement
Office hours: Wednesday, 12:45-2:45pm, Huang Basement

For questions outside office hours, please use the class forum.

Lectures: Tuesdays and Thursdays 1:30pm to 2:50pm in 370-370. We may have a few sessions at irregular times; see the Syllabus.
What do the following technologies have in common: robots that can navigate space and perform duties, search engines that can index billions of images and videos, algorithms that can diagnose medical images for diseases, or smart cars that can see and drive safely? Lying at the heart of these modern AI applications are computer vision technologies that can perceive, understand and reconstruct the complex visual world.

Computer vision is one of the fastest growing and most exciting AI disciplines in today's academia and industry. This course is designed to open the doors for students who are interested in learning about the fundamental principles and important applications of computer vision. During the 10-week course, we will introduce a number of fundamental concepts in computer vision. We will expose students to a number of real-world applications that are important to our daily lives. More importantly, we will guide students through a series of well designed projects such that they will get to implement a few interesting and cutting-edge computer vision algorithms.

Homework: 80%
• HW0 (theoretical + programming): 8%
• HW1 (theoretical): 13%
• HW2 (programming and writeup): 13%
• HW3 (theoretical): 13%
• HW4 (programming and writeup): 13%
• HW5 (theoretical + programming): 20%

Extra credit: 2% for students who participate actively on Piazza.

We strongly recommend using LaTeX, but also accept other typed or scanned assignments. However, students are responsible for legibility and we reserve the right to deduct points if the solution is not clear. Here is the template for LaTeX.

All assignments (with code attached) must be turned in to GradeScope. Make an account and sign up for the class using the code: MBRJEM. All code must also be submitted via email to [email protected] as a zip file "yourSUNetID_HW[0-5]_code.zip". No paper submission is required for HWs.

Using late days:
• You have 5 free late days total.
• You can use up to 3 late days per assignment.
(Homework will not be accepted more than 3 days late.)
• Please put the number of late days used on the first page of your pdf.
• If you have used all of your late days, there is a 25% penalty for each day late.
• Explicitly mark the number of late days you use on an assignment if you are using late days. For example, if you turn it in by 5pm the next day, write "1 late day." If it's 5:01 pm the next day, write "2 late days." It is an honor code violation to write down the wrong time. (If you turn in late and don't write the number of days, we'll round up to 3.)

We hope that you are familiar with:

• College-level calculus (e.g. MATH 19 or 41) - You'll need to be able to take a derivative, and maximize a function by finding where the derivative = 0.
• Linear algebra (e.g. MATH 51) - We will use matrix transpose, inverse, and other operations to do algebra with matrix expressions. We'll use transformation matrices to rotate/transform points, and we'll use Singular Value Decomposition. (These topics are important for the homeworks, but if you are a quick learner you should be able to learn them during the class if you haven't yet. We will have review sessions and provide review materials.)
• Basic probability and statistics (e.g. CS 109 or other stats course) - You should understand conditional probability, mean, and variance.
• We also require a decent amount of programming skills, such as entry-level Matlab, and the ability to work in the Linux environment.

If you are unsure about your background, we encourage you to try out Problem Set 0, which is a "normalizing" problem set for the class. HW0 will help you gauge if CS131 is the right level for you.

## Instructor

Khatib's current research is in human-centered robotics, human-friendly robot design, dynamic simulations, and haptic interactions.
His exploration in this research ranges from the autonomous ability of a robot to cooperate with a human to the haptic interaction of a user with an animated character or a surgical instrument. His research in human-centered robotics builds on a large body of studies he pursued over the past 25 years and published in over 200 contributions in the robotics field.

Prof. Khatib was the Program Chair of ICRA 2000 (San Francisco) and Editor of "The Robotics Review" (MIT Press). He has served as the Director of the Stanford Computer Forum, an industry affiliate program. He is currently the President of the International Foundation of Robotics Research (IFRR) and Editor of STAR, Springer Tracts in Advanced Robotics. Prof. Khatib is an IEEE Fellow, a Distinguished Lecturer of IEEE, and a recipient of the JARA Award.
# C Program: Power of a Number

When $n$ is a positive integer, an integer $a$ multiplied by itself $n$ times is represented by $a^{n}$. $a$ is called the base and $n$ is known as the exponent. The result of the product is known as the power. When $n$ is a whole number,

$a^{2} = a \times a$

$a^{3} = a \times a \times a$

$a^{n} = a \times a \times a ... \times a$

When $n = 0$, $a^{0} = 1$.

In the C program below, we create a function called power() which computes the power of an entered number to some desired exponent by recursion.

```c
#include <stdio.h>

long power(int, int);

int main()
{
    int exp, n;
    printf("Number: ");
    scanf("%d", &n);
    printf("Exponent: ");
    scanf("%d", &exp);
    printf("%d^%d = %ld \n", n, exp, power(n, exp));
    return 0;
}

/* Recursive definition: a^e = a * a^(e-1), with a^0 = 1. */
long power(int number, int exponent)
{
    if (exponent) {
        return number * power(number, exponent - 1);
    } else {
        return 1;
    }
}
```

We run the above program to find $3^{4}$, which gives the following result:

```
$ ./a.out
Number: 3
Exponent: 4
3^4 = 81
```

We can also get the power of a number using the C function pow() from <math.h>, which takes two arguments of type double: the first argument is the number and the second argument is the exponent. So pow(2,3) computes $2^{3}$ and gives 8. The program below computes the power of a number using the pow() function.

```c
#include <stdio.h>
#include <math.h>

int main()
{
    double n, exponent;
    printf("Number: ");
    scanf("%lf", &n);
    printf("Exponent: ");
    scanf("%lf", &exponent);
    printf("%.1lf^%.1lf = %.2lf \n", n, exponent, pow(n, exponent));
    return 0;
}
```
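A side note not in the tutorial above: the recursive power() makes $n$ multiplications, while exponentiation by squaring needs only $O(\log n)$. A sketch of the idea (shown here in Python for brevity; the same loop ports directly to C):

```python
def fast_power(base, exponent):
    """Exponentiation by squaring: O(log n) multiplications instead of O(n)."""
    result = 1
    while exponent > 0:
        if exponent & 1:      # exponent is odd: fold one factor into the result
            result *= base
        base *= base          # square the base
        exponent >>= 1        # halve the exponent
    return result

print(fast_power(3, 4))  # 81, matching the C program's output
```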
https://www.transtutors.com/questions/p5-2a-198879.htm
# P5-2A

• ### Principles (Solved) September 21, 2014

c. How large must each payment be if the loan is for 50,000, the interest rate is 10%, and the loan is paid off in equal installments at the end of each of the next 10 years? This loan is for the same amount as the loan in part b, but the payments are spread out over twice as many...

The first step is to compute the required annual payment using the present value of an annuity, which is expressed in the formula PV = A x PVIFA(r,n), or using Excel's PMT() function. After computing...

• ### accounting problem (Solved) November 30, 2012

Cleaning Supplies, No. 130; Prepaid Insurance, No. 157; Equipment, No. 158; Accumulated Depreciation-Equipment, No. 201; Accounts Payable, No. 212; Salaries Payable, No. 301; C. Brown, Capital, No. 306; C. Brown, Drawing, No. 350; Income Summary, No. 400; Service Revenue, No. 633; Gas & Oil Expense, No. ...

• ### Politicization of GAAP (Solved) April 04, 2013

-making? (c) What arguments can be raised against the "politicization" of accounting rule-making?

Ans (a): The CAP was the first private-sector accounting body in the world, whose main task was to set accounting standards in the USA. However, due to great interest and interference by the...

• ### Using the following data, complete the balance sheet. (Solved) July 08, 2014

potential use. c. The land cost Blue Co. $11,000; it was recently assessed for real estate tax purposes at a value of $15,000. d. Blue Co.'s president isn't sure of the amount of the note payable, but he does know that he signed a note. e. Since Blue Co. was formed, net income has totaled $33...
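The annuity step described in the first snippet can be sketched as follows, using the snippet's own numbers (50,000 borrowed at 10% for 10 years); the helper name is mine, not from the page:

```python
def annuity_payment(pv, r, n):
    """Solve PV = A * PVIFA(r, n) for the payment A,
    where PVIFA(r, n) = (1 - (1 + r)**-n) / r."""
    pvifa = (1 - (1 + r) ** -n) / r
    return pv / pvifa

# Loan of 50,000 at 10% repaid in 10 equal year-end installments.
print(round(annuity_payment(50_000, 0.10, 10), 2))  # 8137.27
```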
https://gmatclub.com/forum/the-customs-department-has-3-2-of-its-inspected-items-opened-245278.html
# The customs department has 3.2% of its inspected items opened...

**Alessiod (22 Jul 2017):**

The customs department has 3.2% of its inspected items opened and resealed before being released. It also has 20% of its opened items released but not resealed. What percentage of the items inspected are opened?

(A) 2.5%
(B) 3.0%
(C) 3.5%
(D) 4.0%
(E) 4.5%

Source: Optimus Prep

I got lost on this one... actually I barely understood the meaning of the question. Kudos for the best explanation.

**Senior PS Moderator (22 Jul 2017):**

Assume the total number of inspected items is 1000. Since 3.2% of the inspected items are opened and resealed, 32 items are opened and resealed. Also, 20% of the opened items are released without being resealed, so 80% (100 - 20) of the opened items are resealed.

80% of the total number of opened items (call it $x$) equals 32:

$$x = \frac{32 \times 100}{80} = 40$$

Therefore the total number of opened items is 40, which is $\frac{40}{1000} \times 100$, or 4%, of the items inspected (Option D).

Hope that helps!

**Manager (22 Jul 2017):**

+D (worked solution attached as an image)

**Intern (24 Jul 2017):**

Basically, 80% of the opened items are resealed, which is equal to 3.2% of the total items. So: 80% of opened = 3.2% of the total, hence 4% of the total is opened.

**ScottTargetTestPrep (26 Jul 2017):**

We can let 1000 = the number of inspected items and x = the number of opened items. We are given that the number of inspected items opened and resealed is 3.2%, or 32 items. We are also given that 20% of the opened items, or 0.2x, are not resealed. Since the number of opened resealed items plus the number of opened unsealed items equals the total number of opened items, we have:

32 + 0.2x = x

32 = 0.8x

x = 32/0.8 = 320/8 = 40

Since there are 40 opened items out of 1000 inspected items, the percentage of inspected items that are opened is 40/1000 = 4/100 = 4%.

**Alessiod (27 Jul 2017):**

Thank you Scott, extremely clear explanation. I was unable to understand the meaning of the data till now. Finally I got it.
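The arithmetic in these solutions is easy to sanity-check; a quick sketch (the variable names are mine, not from the thread):

```python
# Out of 1000 inspected items, 3.2% are opened AND resealed,
# and resealed items are 80% of all opened items (20% are not resealed).
resealed = 0.032 * 1000   # items opened and resealed
x = resealed / 0.8        # total opened items: solves 32 + 0.2*x = x
print(x)                  # 40.0
print(100 * x / 1000)     # 4.0 -> answer (D) 4.0%
```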
http://openstudy.com/updates/4fa42c54e4b029e9dc34b5a1
## candice1

Solve each equation by completing the square. Round to the nearest hundredth if necessary.

3. $x^2+6x=16$
4. $x^2-10x=11$
5. $x^2-9x=0$
6. $x^2+16x=15$
7. $3x^2+18x-81=0$

Use the quadratic formula to solve each equation. Round to the nearest hundredth if necessary.

8. $4x^2+3x-8=0$
9. $2x^2-7x+3=0$
10. $x^2-2x+3=0$
11. $2x^2+4x-7=0$

I don't get this stuff.

1. **math456:** For the quadratic-formula questions, use $x=\frac{-b \pm \sqrt{b^{2}-4ac}}{2a}$. For question 8: $a=4$, $b=3$, $c=-8$; plug these numbers into the formula and solve.

2. **candice1:** I knew I was missing a formula lol. Thanks.

3. **math456:** lol kk, welcome. For completing the square: http://www.purplemath.com/modules/sqrquad.htm

4. **math456:** For your question 3: $x^{2}+6x=16$. Since $x^{2}+6x+9$ is a perfect square, add 9 to both sides: $(x+3)^{2}=16+9$

5. **math456:** $(x+3)^{2}=25$. Take the square root of both sides: $\sqrt{(x+3)^{2}}=\sqrt{25}$

6. **math456:** $x+3=\pm 5$

7. **candice1:** ...

8. **math456:** $x=5-3$ or $x=-5-3$, so $x=2$ or $x=-8$. No, this is the answer to question 3.

9. **candice1:** oh okay thanks.

10. **math456:** you're welcome

11. **candice1:** can you do the rest lol?

12. **math456:** Try doing it; if you get stuck I will help you.

13. **candice1:** I tried and I'm stuck. And I have to turn these problems in in 20 mins and it's a test.

14. **beenthebest:** $X^2-6X=20$. It's saying round to the nearest hundredth; can someone help me please?
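The quadratic formula from the thread can be checked with a few lines; the helper name is mine:

```python
import math

def quadratic_roots(a, b, c):
    """Real roots of a*x^2 + b*x + c = 0 via the quadratic formula."""
    disc = b * b - 4 * a * c
    if disc < 0:
        return ()                       # no real roots (e.g. question 10)
    r = math.sqrt(disc)
    return ((-b + r) / (2 * a), (-b - r) / (2 * a))

print(quadratic_roots(1, 6, -16))       # question 3: (2.0, -8.0)
print(quadratic_roots(4, 3, -8))        # question 8: roots are about 1.09 and -1.84
```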
http://math.stackexchange.com/questions/185742/iterated-exponent-of-i/185750
# Iterated exponent of $i$ WolframAlpha seems to tell me that $e^{e^{e^{e^{e^{e^{e^{e^{e^{e^{e^i}}}}}}}}}} = 1$, see link. Is this just an error or is it for real? Adding one more $e$ to the bottom of the tower gives me the number $e$, so it's specific to the 11 $e$'s I used in the tower. - Wolfram says it is 1.00000000000000... not just 1, so I think it is an approximation, otherwise it should have shown 1 only. –  pritam Aug 23 '12 at 7:07 This happens because with 10 $e$'s you get a number very very close to zero, because with 9 $e$'s you get a number with a very large negative real part, because with 8 $e$'s you get a number whose real part is large and imaginary part is between $\pi/2$ and $3\pi/2$. The trajectory of the iterations is chaotic. –  Rahul Aug 23 '12 at 7:11 You can readily check this using an independent method. Let $x_n + i y_n\in\mathbb{C}$ be the value of a tower of $n$ copies of $e$ with a single $i$ at the top, so that $x_{n+1}+iy_{n+1}=\exp(x_n+iy_n)$. This can be rewritten as $e^{x_n}\left(\cos y_n+i \sin y_n\right)$, giving the recursion $$x_{n+1}=e^{x_n}\cos y_n,\qquad y_{n+1}=e^{x_n}\sin y_n.$$ The starting values are $x_0=0$ and $y_0=1$. Evaluating this recursion numerically gives the following table: $$\begin{eqnarray} (x_1,y_1) &=& (0.5403023058681398, &&0.8414709848078965) \\ (x_2,y_2) &=& (1.1438356437916404, &&1.2798830013730222) \\ (x_3,y_3) &=& (0.9002890839010574, &&3.006900083345737) \\ (x_4,y_4) &=& (-2.438030346526128, &&0.3303849520417783) \\ (x_5,y_5) &=& (0.08260952954639851, &&0.028331354522507797) \\ (x_6,y_6) &=& (1.0856817633955023, &&0.030767067267249513) \\ (x_7,y_7) &=& (2.960056578435498, &&0.09110100745978908) \\ (x_8,y_8) &=& (19.21903374615272, &&1.7557331998479278) \\ (x_9,y_9) &=& (-40856897.72613553, &&218399070.28039825) \\ (x_{10},y_{10}) &=& (-0.0, &&-0.0) \\ (x_{11},y_{11}) &=& (1.0, &&-0.0), \end{eqnarray}$$ which seems to confirm what Alpha says. 
However, it should be clear that what's actually happening is that a large negative real part is reached at $n=9$. This produces a numerical zero at $n=10$, followed by $1.0000\ldots$ at $n=11$. While the correct value at $n=11$ is close to $1$, it's not exact. The exact value will differ from $1$ somewhere around the $18$-millionth digit. Indeed, Mathematica tells me the value at $n=10$ is actually about $(-4.72 - 3.51i)\times10^{-17743926}$. –  Rahul Aug 23 '12 at 7:15 Thanks, I knew it was too good to be true! I had hoped to come near of the complex fixed points $e^z = z$ but instead found this. –  Dog Aug 23 '12 at 7:17 As an additional note, the tower with 10 $e$'s gives a value of $\approx -4.722810346\times 10^{-17743926}-3.513602657\times 10^{-17743926}i$; taking the exponential of that will certainly give something very nearly equal to $1$. Alternatively, substituting this tiny value into the series $\exp\,x-1=x+\frac{x^2}{2}+\cdots$ is also illustrative. –  J. M. Aug 23 '12 at 7:19
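The recursion above is straightforward to reproduce; this sketch uses Python's built-in complex arithmetic rather than tracking $(x_n, y_n)$ separately:

```python
import cmath

z = 1j                       # start with i at the top of the tower
for n in range(1, 12):
    z = cmath.exp(z)         # add one more e at the bottom of the tower
    print(n, z)

# After n = 9 the real part is a huge negative number, so at n = 10 the
# exponential underflows to a numerical zero, and n = 11 prints exactly 1.0,
# even though the true value differs from 1 far beyond double precision.
```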
https://greenemath.com/Trigonometry/33/Ambiguous-Case-Law-of-SinesPracticeTest.html
Test Objectives

• Demonstrate the ability to solve oblique triangles (SSA)

Law of Sines Ambiguous Case Practice Test:

#1: Instructions: Solve each triangle ABC. Round your answers to the nearest tenth.

$$a)\hspace{.1em}A=94°, c=16 \hspace{.1em}\text{m}, a=16 \hspace{.1em}\text{m}$$

$$b)\hspace{.1em}A=147°, c=25 \hspace{.1em}\text{mi}, a=4 \hspace{.1em}\text{mi}$$

#2: Instructions: Solve each triangle ABC. Round your answers to the nearest tenth.

$$a)\hspace{.1em}B=46°, a=50 \hspace{.1em}\text{km}, b=47 \hspace{.1em}\text{km}$$

$$b)\hspace{.1em}B=114°, a=44 \hspace{.1em}\text{in}, b=78\hspace{.1em}\text{in}$$

#3: Instructions: Solve each triangle ABC. Round your answers to the nearest tenth.

$$a)\hspace{.1em}C=77°, b=49 \hspace{.1em}\text{cm}, c=6 \hspace{.1em}\text{cm}$$

$$b)\hspace{.1em}A=45°, c=44 \hspace{.1em}\text{mi}, a=34\hspace{.1em}\text{mi}$$

#4: Instructions: Solve each triangle ABC. Round your answers to the nearest tenth.

$$a)\hspace{.1em}A=82°, c=46 \hspace{.1em}\text{m}, a=5\hspace{.1em}\text{m}$$

$$b)\hspace{.1em}C=27°, b=44 \hspace{.1em}\text{in}, c=32 \hspace{.1em}\text{in}$$

#5: Instructions: Solve each triangle ABC. Round your answers to the nearest tenth.

$$a)\hspace{.1em}C=149°, b=40 \hspace{.1em}\text{ft}, c=47 \hspace{.1em}\text{ft}$$

$$b)\hspace{.1em}B=121°, a=21 \hspace{.1em}\text{ft}, b=34 \hspace{.1em}\text{ft}$$

Written Solutions:

#1: Solutions:

$$a)\hspace{.1em}\text{Not a Triangle}$$

$$b)\hspace{.1em}\text{Not a Triangle}$$

#2: Solutions:

$$a)\hspace{.1em}C=84.1°, A=49.9°, c=65 \hspace{.1em}\text{km}$$

$$\text{or}$$

$$C=3.9°, A=130.1°, c=4.4 \hspace{.1em}\text{km}$$

$$b)\hspace{.1em}C=35°, A=31°, c=49 \hspace{.1em}\text{in}$$

#3: Solutions:

$$a)\hspace{.1em}\text{Not a Triangle}$$

$$b)\hspace{.1em}B=68.8°, C=66.2°, b=44.8 \hspace{.1em}\text{mi}$$

$$\text{or}$$

$$B=21.2°, C=113.8°, b=17.4 \hspace{.1em}\text{mi}$$

#4: Solutions:

$$a)\hspace{.1em}\text{Not a Triangle}$$

$$b)\hspace{.1em}A=114.4°, B=38.6°, a=64.2 \hspace{.1em}\text{in}$$

$$\text{or}$$

$$A=11.6°, B=141.4°, a=14.2 \hspace{.1em}\text{in}$$

#5: Solutions:

$$a)\hspace{.1em}B=26°, A=5°, a=8 \hspace{.1em}\text{ft}$$

$$b)\hspace{.1em}C=27°, A=32°, c=18 \hspace{.1em}\text{ft}$$
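An SSA solver in the spirit of these problems can be sketched in a few lines; it checks problem #2(a). The function name and interface are mine, not from the page:

```python
import math

def solve_ssa(angle_deg, other_side, opposite_side):
    """SSA case of the Law of Sines: given one angle, the side opposite it,
    and one other side, return the list of (A, C, c) solutions in degrees.

    sin(A)/a = sin(B)/b.  If sin(A) > 1 there is no triangle; otherwise
    both A = asin(...) and its supplement 180 - A are candidate triangles.
    """
    sin_known = math.sin(math.radians(angle_deg))
    sin_a = other_side * sin_known / opposite_side
    if sin_a > 1:
        return []                                   # no triangle
    base = math.degrees(math.asin(sin_a))
    solutions = []
    for a_deg in (base, 180 - base):
        third = 180 - angle_deg - a_deg
        if third > 0:
            third_side = opposite_side * math.sin(math.radians(third)) / sin_known
            solutions.append((round(a_deg, 1), round(third, 1), round(third_side, 1)))
    return solutions

# #2(a): B = 46 degrees, a = 50 km, b = 47 km -> two triangles; the first
# matches the written solution exactly, the second can differ in the last
# digit because the page rounds intermediate angles first.
print(solve_ssa(46, 50, 47))
```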
https://socratic.org/questions/580776e011ef6b51cf323976
# Question #23976

##### 1 Answer

Oct 24, 2016

$\left(0 , - \frac{1}{2}\right)$

#### Explanation:

Given the coordinate points $\left({x}_{1} , {y}_{1}\right) \text{ and } \left({x}_{2} , {y}_{2}\right)$, the coordinates of the midpoint $\left({x}_{m} , {y}_{m}\right)$ are found as follows.

${x}_{m} = \frac{1}{2} \left({x}_{1} + {x}_{2}\right) \text{ the average of the x-coordinates}$

${y}_{m} = \frac{1}{2} \left({y}_{1} + {y}_{2}\right) \text{ the average of the y-coordinates}$

Here $\left({x}_{1} , {y}_{1}\right) = \left(- 1 , 2\right) \text{ and } \left({x}_{2} , {y}_{2}\right) = \left(1 , - 3\right)$

$\Rightarrow {x}_{m} = \frac{1}{2} \left(- 1 + 1\right) = 0$ and ${y}_{m} = \frac{1}{2} \left(2 - 3\right) = - \frac{1}{2}$

Hence the coordinates of the midpoint are $\left(0 , - \frac{1}{2}\right)$.
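The computation is a one-liner to check numerically (the helper name is mine):

```python
def midpoint(p1, p2):
    """Midpoint of a segment: average the coordinates componentwise."""
    (x1, y1), (x2, y2) = p1, p2
    return ((x1 + x2) / 2, (y1 + y2) / 2)

print(midpoint((-1, 2), (1, -3)))   # (0.0, -0.5), i.e. (0, -1/2)
```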
http://mathoverflow.net/feeds/question/62695
# Is there a fast way to compute matrix multiplication mod p?

**Question (2011-04-23):**

I think people have some general strategy to do matrix multiplication fast. But what about for the finite field of $p$ elements? (e.g. when $p=2$, one should have some faster way.)

So my question is: given two integer-entry matrices $A$ and $B$, is there a fast way to compute $AB \bmod p$? And is there a fast way of computing $A^n \bmod p$ for $n$ not too big? (Note that for $n=|GL_t(\mathbb{Z}/p)|$, we have $A^n=I \bmod p$, where $t$ is the size of $A$.)

**Answer by quid (2011-04-23):**

For $\mathbb{Z}/2\mathbb{Z}$ (and other finite fields of characteristic $2$) there is a specialized library for computing with matrices over that structure. It is called [M4RI](http://m4ri.sagemath.org/); on this website, in particular under "further reading", one can find various texts related to this.

In particular, a paper by developers of the library: Martin Albrecht, Gregory Bard, William Hart. Algorithm 898: Efficient Multiplication of Dense Matrices over GF(2). ACM Transactions on Mathematical Software, 2010. Preprint at http://arxiv.org/abs/0811.1714.

Yet the main point there, as far as my understanding goes, is that modulo-$2$ arithmetic can be done very efficiently on a computer, and the point is to really optimize the methods to exploit this.

Let me also say some things in part already in the comments:

I (also) believe that for multiplication, if one counts, say, just the number of multiplications over the base structure and takes this as a measure of 'goodness', it does not matter whether one is over $\mathbb{Z}$ or modulo $n$. This also seems to be in line with the fact that for M4RI $O(n^{\log_2 7})$ is mentioned, and one also has this for integers.

Yet, for certain computations related to integer matrices, it *is* useful to pass to modular arithmetic and then go back, (only) due to the fact that the arithmetic over the base structure (mod $n$ vs. integers) can be much faster; in particular, if the integers involved are large or (perhaps more importantly) can grow large in the process.

For example, to compute determinants of integer matrices, a strategy can be to compute, on the one hand, a bound on the determinant (e.g., using Hadamard's bound) and, on the other hand, the determinant modulo many primes. Combining the modulo-$p$ pieces of information on the determinant, one knows the determinant modulo such a large modulus that there remains only a *unique* integer satisfying both the congruence conditions and the size condition (implied by the bound).

Things like this are discussed, for example, in H. Cohen, A Course in Computational Algebraic Number Theory, Springer GTM 138.

**Answer by aginensky (2011-04-23):**

The improvement of matrix multiplication from $O(n^3)$ to $O(n^{2.4})$ is based on the Strassen equations for matrix multiplication. [This question](http://mathoverflow.net/questions/57725/strassen-algorithm-7-multiplications) talks about this and gives further references. If your question is rephrased as "are there special equations for matrix multiplication in characteristic $p$?", then I think the answer, if known, will be in one of those references. If I were a betting man, I'd bet no. As a person not averse to speculation, I am highly skeptical. Matrix multiplication feels to me to be characteristic-independent and of the flavor: if it was true in all large (positive) characteristics, then it would be true in characteristic zero. Also, almost all results about equations of determinantal varieties, secant varieties, etc. don't seem to be different in any characteristic. Of course, this is not true on the nose, as higher syzygies can be different and, in my very feeble understanding of such matters, more complicated. Anyone in a position to terminate or validate my musings?

**Answer by Henry Cohn (2011-04-23):**

For any field, we can define the exponent of matrix multiplication over that field to be the smallest number $\omega$ such that $n \times n$ matrix multiplication can be done in $n^{\omega + o(1)}$ field operations as $n \to \infty$. Schönhage proved that it is invariant under taking field extensions, so it depends only on the characteristic of the field. Probably it equals $2$ in every characteristic, but that isn't known. (Certainly $\omega \ge 2$, since it takes at least one operation to get each entry.)

Let $\omega_p$ be the exponent in characteristic $p$. Then one can show that $\limsup_{p \to \infty} \omega_p \le \omega_0$, basically because every low-rank tensor decomposition in characteristic $0$ will work for all but finitely many primes. Over the rationals, you just need to avoid primes that occur in denominators.

However, for small primes the exponent could (as far as we know) be better or worse than in characteristic $0$, and it's even possible that it could be substantially better for all primes, although in that case you can show that as $p \to \infty$ the asymptotics would have to take longer and longer to kick in (i.e., the size of the matrices needed to see the improvement would grow as a function of $p$).

Strassen's exponent-$2.8$ algorithm works in every characteristic, and it's the only practical method that achieves exponent less than $3$. However, if you want to do this in practice, it's important to think about issues like cache and memory access. (These issues are often more important than counting arithmetic operations.) Unless you really know what you are doing, it's not worth trying to write your own code, except as a learning exercise. For example, in characteristic $2$ the M4RI code mentioned by unknown (google) seems like it could be a good bet.
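For moderate matrix sizes, the practical answer to the question is plain multiplication with reduction mod $p$, plus square-and-multiply for powers; a sketch of both (plain Python, no M4RI-style bit packing):

```python
def matmul_mod(A, B, p):
    """(A @ B) mod p for integer matrices given as lists of rows."""
    n, m, k = len(A), len(B), len(B[0])
    return [[sum(A[i][t] * B[t][j] for t in range(m)) % p for j in range(k)]
            for i in range(n)]

def matpow_mod(A, e, p):
    """A^e mod p by repeated squaring: O(log e) matrix multiplications."""
    n = len(A)
    result = [[int(i == j) for j in range(n)] for i in range(n)]  # identity
    while e > 0:
        if e & 1:
            result = matmul_mod(result, A, p)
        A = matmul_mod(A, A, p)
        e >>= 1
    return result

A = [[1, 1], [1, 0]]          # Fibonacci matrix: A^n holds F(n+1), F(n), F(n-1)
print(matpow_mod(A, 10, 7))   # [[5, 6], [6, 6]] = [[89, 55], [55, 34]] mod 7
```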
https://qiskit.org/documentation/locale/ja_JP/stubs/qiskit.circuit.library.YGate.html
# qiskit.circuit.library.YGate

class YGate(label=None) [source]

The single-qubit Pauli-Y gate ($\sigma_y$).

Matrix representation:

$Y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}$

Circuit symbol:

         ┌───┐
    q_0: ┤ Y ├
         └───┘

Equivalent to a $\pi$ radian rotation about the Y axis. A global phase difference exists between the definitions of $RY(\pi)$ and $Y$:

$RY(\pi) = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} = -i Y$

The gate is equivalent to a combined bit flip and phase flip:

$|0\rangle \rightarrow i|1\rangle, \qquad |1\rangle \rightarrow -i|0\rangle$

__init__(label=None) [source] — Create new Y gate.

Methods

- __init__([label]) — Create new Y gate.
- add_decomposition(decomposition) — Add a decomposition of the instruction to the SessionEquivalenceLibrary.
- assemble() — Assemble a QasmQobjInstruction.
- broadcast_arguments(qargs, cargs) — Validation and handling of the arguments and their relationship.
- c_if(classical, val) — Add a classical condition on register classical and value val.
- control([num_ctrl_qubits, label, ctrl_state]) — Return a (multi-)controlled-Y gate.
- copy([name]) — Copy of the instruction.
- inverse() — Return the inverted Y gate ($Y^{\dagger} = Y$).
- is_parameterized() — Return True iff the instruction is parameterized.
- mirror() — DEPRECATED: use instruction.reverse_ops().
- power(exponent) — Creates a unitary gate as gate^exponent.
- qasm() — Return a default OpenQASM string for the instruction.
- repeat(n) — Creates an instruction with the gate repeated n times.
- reverse_ops() — For a composite instruction, reverse the order of sub-instructions.
- soft_compare(other) — Soft comparison between gates.
- to_matrix() — Return a numpy.array for the gate's unitary matrix.
- validate_parameter(parameter) — Gate parameters should be int, float, or ParameterExpression.

Attributes

- decompositions — Get the decompositions of the instruction from the SessionEquivalenceLibrary.
- definition — Return definition in terms of other basic gates.
- duration — Get the duration.
- label — Return gate label.
- params — Return instruction params.
- unit — Get the time unit of duration.
add_decomposition(decomposition) — Add a decomposition of the instruction to the SessionEquivalenceLibrary.

assemble() — Assemble a QasmQobjInstruction.

broadcast_arguments(qargs, cargs) — Validation and handling of the arguments and their relationship. For example, cx([q[0],q[1]], q[2]) means cx(q[0], q[2]); cx(q[1], q[2]). This method yields the arguments in the right grouping. In the given example:

    in: [[q[0],q[1]], q[2]], []
    outs: [q[0], q[2]], []
          [q[1], q[2]], []

- If len(qargs) == 1: [q[0], q[1]] -> [q[0]], [q[1]]
- If len(qargs) == 2:
  [[q[0], q[1]], [r[0], r[1]]] -> [q[0], r[0]], [q[1], r[1]];
  [[q[0]], [r[0], r[1]]] -> [q[0], r[0]], [q[0], r[1]];
  [[q[0], q[1]], [r[0]]] -> [q[0], r[0]], [q[1], r[0]]
- If len(qargs) >= 3: [[q[0], q[1]], [r[0], r[1]], ...] -> [q[0], r[0], ...], [q[1], r[1], ...]

Parameters: qargs (List) – list of quantum bit arguments; cargs (List) – list of classical bit arguments.

Returns: a tuple with single arguments (Tuple[List, List]).

Raises: CircuitError – if the input is not valid, for example if the number of arguments does not match the gate expectation.

c_if(classical, val) — Add a classical condition on register classical and value val.

control(num_ctrl_qubits=1, label=None, ctrl_state=None) [source] — Return a (multi-)controlled-Y gate. One control qubit returns a CY gate.

Parameters: num_ctrl_qubits (int) – number of control qubits; label (str or None) – an optional label for the gate [default: None]; ctrl_state (int or str or None) – control state expressed as an integer, a string (e.g. '110'), or None. If None, use all 1s.

Returns: controlled version of this gate (ControlledGate).

copy(name=None) — Copy of the instruction.

Parameters: name (str) – name to be given to the copied circuit; if None, the name stays the same.

Returns: a copy of the current instruction, with the name updated if it was provided (qiskit.circuit.Instruction).

property decompositions — Get the decompositions of the instruction from the SessionEquivalenceLibrary.

property definition — Return definition in terms of other basic gates.
property duration — Get the duration.

inverse() [source] — Return the inverted Y gate ($Y^{\dagger} = Y$).

is_parameterized() — Return True iff the instruction is parameterized, else False.

property label — Return gate label (str).

mirror() — DEPRECATED: use instruction.reverse_ops(). Returns a new instruction with sub-instructions reversed (qiskit.circuit.Instruction).

property params — Return instruction params.

power(exponent) — Creates a unitary gate as gate^exponent.

Parameters: exponent (float) – the exponent; to_matrix of the result is self.to_matrix^exponent.

Returns: qiskit.extensions.UnitaryGate.

Raises: CircuitError – if the gate is not unitary.

qasm() — Return a default OpenQASM string for the instruction. Derived instructions may override this to print in a different format (e.g. measure q[0] -> c[0];).

repeat(n) — Creates an instruction with the gate repeated n times.

Parameters: n (int) – number of times to repeat the instruction.

Returns: an instruction containing the definition (qiskit.circuit.Instruction).

Raises: CircuitError – if n < 1.

reverse_ops() — For a composite instruction, reverse the order of sub-instructions. This is done by recursively reversing all sub-instructions; it does not invert any gate. Returns a new instruction with sub-instructions reversed (qiskit.circuit.Instruction).

soft_compare(other) — Soft comparison between gates. Their names, numbers of qubits, and numbers of classical bits must match. The number of parameters must match, and each parameter is compared; if one is a ParameterExpression it is not taken into account.

Parameters: other (instruction) – other instruction.

Returns: whether self and other are equal up to parameter expressions (bool).

to_matrix() — Return a numpy.array for the gate's unitary matrix, if the Gate subclass has a matrix definition (np.ndarray).

Raises: CircuitError – if a Gate subclass does not implement this method, an exception is raised when this base class method is called.

property unit — Get the time unit of duration.

validate_parameter(parameter) — Gate parameters should be int, float, or ParameterExpression.
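The algebraic claims on this page (Y is Hermitian and self-inverse, and $RY(\pi)$ equals $Y$ up to the global phase $-i$) can be checked numerically without a quantum backend. The sketch below rebuilds the matrices from the definitions above using only NumPy:

```python
import numpy as np

# Pauli-Y and the standard RY(theta) rotation, as defined above.
Y = np.array([[0, -1j], [1j, 0]])

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

# Y is Hermitian and self-inverse: Y^dagger = Y and Y @ Y = I.
assert np.allclose(Y.conj().T, Y)
assert np.allclose(Y @ Y, np.eye(2))

# RY(pi) differs from Y only by the global phase -i.
assert np.allclose(ry(np.pi), -1j * Y)

# Combined bit flip and phase flip: |0> -> i|1>, |1> -> -i|0>.
ket0, ket1 = np.array([1, 0]), np.array([0, 1])
assert np.allclose(Y @ ket0, 1j * ket1)
assert np.allclose(Y @ ket1, -1j * ket0)
```

In an actual circuit you would of course use the YGate class itself; the NumPy check is only a way to verify the documented identities.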
http://tex.stackexchange.com/tags/tipa/hot
# Tag Info

**12** Could be tweaked a bit more, but:

    \documentclass{scrartcl}
    \usepackage{tipa,graphics}
    \makeatletter
    \providecommand\xloweraccent{\@ifnextchar[{\lower@accent x\empty}%
      {\lower@accent x\empty[\z@]}}
    \def\brak#1{\xloweraccent{%
      \raisebox{-.3ex}{\resizebox{!}{.6ex}{\bfseries(}}%
      {\fontencoding{T3}\selectfont\char12}%
    ...

**11** It's always better to use "precomposed" characters rather than trying to cobble something together. This may involve some work with font tools, and others on this list are better able to address that than I. However, many fonts created for use within linguistics environments are available from SIL International, and I suggest looking there before trying to ...

**10** Thank you very much, David Carlisle! After learning how it works from your answer, I have been able to make my own tweaks: instead of the brackets and the x, I am using the tipa characters that are produced with \textsublhalfring, \textsubrhalfring and \textovercross respectively. I also forced the bracketed diacritics to be always upright, so the kerning is ...

**10** The Arabic font is not a factor. The problem is that fontspec redefines \textipa under the assumption that the Latin Modern fonts have the IPA glyphs, which however should be called by Unicode. Solution: restore the Computer Modern fonts for IPA.

    \documentclass[12pt,openany]{book}
    \pagestyle{plain}
    \usepackage[margin=1.8cm]{geometry}
    \geometry{a4paper}
    ...

**10** Since this answer might be of use to other linguists, I'm giving a detailed answer of how to effectively transfer documents from Word to TeX, assuming you are using the regular SIL phonetic fonts in Word. The modern TeX engines LuaTeX and XeLaTeX both use UTF-8 as their file encoding, and can use any OpenType font on your system. See the following for some ...

**9** The warnings are harmless, and the substitutions will happen automatically.
If you want to get rid of the warning you could redefine the \textipa command and the IPA environments to always use Computer Modern as shown in the example below. If you decide later to change to using e.g. mathptmx then you would need to change the definition of \tiparmdefault to ...

**8** xunicode (loaded by fontspec) contains the definitions of tipa.sty:

    \documentclass[12pt]{article}
    \usepackage{fontspec}
    \setmainfont{Charis SIL}
    \begin{document}
    Blowzy DJ frumps vex a knight QC
    \textturna
    \textipa{[\!b] [\:r] [\;B]}
    \end{document}

**7** The tipa package redefines some standard commands and this is the cause of the errors you get, but it provides a "compatibility" layer. Notably, the commands it redefines are \s (alias \textsyllabic), \* (no alias provided), \| (no alias provided), \: (alias \tipamedspace), \; (alias \tipathickspace), \! (alias \tipanegthinspace). If you call the package with the ...

**6** Do you need the fontenc package? Detexify says it should be \dh and that worked for me, with or without tipa:

    \documentclass{report}
    \usepackage[T1]{fontenc}
    \usepackage{tipa}
    \begin{document}
    \dh
    \end{document}

Link to Detexify for reference.

**6** It's the usual issue about xstring that by default tries to do full expansion of its arguments.

    \documentclass{memoir}
    \usepackage{xstring,tipa}
    \newcommand*\myreplace[1]{%
      \saveexpandmode\noexpandarg
      \StrSubstitute{#1}{:}{\textlengthmark}%
      \restoreexpandmode}
    \begin{document}
    \myreplace{gaga:}
    \end{document}

If you don't need applications of ...

**6** You can ignore it safely. Latin Modern fonts have no T3 encoding support. Since Latin Modern families come from Computer Modern fonts, it will work fine combining LM fonts with CM IPA fonts in T3 encoding.

**5** I don't know anything about fonts, so I can't fully answer the question. However, a workaround might be to use kpfonts and replace your f's with s's.
The following code gives:

    \documentclass{article}
    \usepackage[nofligatures,veryoldstyle]{kpfonts}
    \begin{document}
    \emph{osten ossing}
    \end{document}

It is likely that if the character above is suitable, ...

**5** Use \textipa{\|+{\;L}} with additional braces. The command \| executes \@omniaccent that requires two arguments, so with \|+\;L the second argument is \;, which is wrong.

**5** It seems like a bug in silence.sty; it doesn't show if we patch \wrong@fontshape:

    \documentclass{article}
    \usepackage{silence,etoolbox}
    \makeatletter
    \patchcmd{\wrong@fontshape}{\@gobbletwo}{}{}{}
    \makeatother
    \WarningFilter{latexfont}{Font shape}
    \WarningFilter{latexfont}{Some font}
    \usepackage{lmodern}
    \usepackage[]{tipa}
    \begin{document}
    ...

**5** The support for tipa input provided by fontspec comes from the xunicode package. However, it does not cover everything that tipa can do. A work-around is to prevent xunicode loading:

    \documentclass[a4paper,12pt, oneside]{article}
    \expandafter\def\csname ver@xunicode.sty\endcsname{}
    \usepackage[tone]{tipa} % tone invokes the tone letters
    ...

**5** I'm not familiar with the tipa package but you can use the \textsuperscript command for typesetting superscript in text (not math) mode:

    \documentclass{article}
    \usepackage{tipa}
    \begin{document}
    \textipa{/bI"wIl.d\textsuperscript{@}r.IN/}
    \end{document}

**5** Try \DeclareFontSubstitution{T3}{ptm}{m}{n} after \usepackage{tipa}

**4** Using active characters is not recommended for this application: you lose the possibility of having any control sequence with an a in its name inside the argument of \myipa. How could you do it? Here's what's necessary.

    \documentclass{article}
    \usepackage{tipa}
    \newcommand\myipa{%
      \begingroup\catcode`\a=\active
      \begingroup\lccode`~=`a
    ...
**4** You can use the IPA characters (of course you need your file to be encoded as UTF-8):

    % -*- coding: utf-8 -*-
    \documentclass[12pt]{article}
    \usepackage{fontspec}
    \setmainfont{Charis SIL}
    \begin{document}
    Blowzy DJ frumps vex a knight QC
    ɳ (U+0273), ɲ (U+0272), ʁ (U+0281), ɱ (U+0271), ə (U+0259)
    \end{document}

**4** Leo Liu's answer says that you can safely ignore them. This answer is a complement to that answer: it tells you how to safely ignore them. (And it borrows heavily from Stefan Kottwitz's answer to an earlier question of mine.) You can use the silence package to turn off warnings from the appropriate package. So \WarningFilter{latexfont}{Some font} ...

**4** As already said by egreg, quoting Unicode's CodeCharts: U+0280 LATIN LETTER SMALL CAPITAL R, voiced uvular trill, Germanic, Old Norse; uppercase is 01A6 Ʀ. Tipa version:

    \documentclass{article}
    \usepackage{tipa}
    \begin{document}
    \texttt{\string\textscr}: \textscr
    \end{document}

Wsuipa version:

    \documentclass{article}
    ...

**3** The simplest way to do this would be to use macros for the variant characters and use a conditional to switch between them. Here's a simple example:

    \documentclass{article}
    \usepackage{tipa}
    \newif\ifregularIPA
    \regularIPAtrue
    \newcommand*{\GS}{\ifregularIPA\textglotstop\else\textsuperscript{?}\fi}
    \begin{document}
    \begin{IPA}
    \GS aral
    \regularIPAfalse
    ...

**3** The problem arises because you are misusing both the \twoacc command and the \r command. \twoacc is for putting two accents on top of each other, and the symbol you want has one underneath and one on top, so \twoacc isn't needed. Also, the \r command takes an argument and puts the ring on top of that argument, so in your example ...

**3** According to the documentation of TIPA, you can also use the command \super which is an abbreviated form of \textsuperscript. Like \textipa{t\super{h} k\super{w} a\super{bc} a\super{b\super{c}}}.

**3** A variant of moewe's solution that works independently of the current font size.
    \documentclass{article}
    \usepackage[utf8]{inputenc} % UTF8
    \usepackage[T1]{fontenc} % use T1 fonts for proper language support
    \usepackage[T1]{tipa}
    \makeatletter
    % \oalignb is like \oalign, but uses \vbox rather than \vtop
    \newcommand{\oalignb}[1]{%
      \leavevmode
    ...

**2** This will make the stacked text the height of a lower-cased t in the current font size. If you wish to align the height to a different letter, change the t in the \scalerel macro argument to something else. The gap between the stack, prior to scaling, is set to .2ex in the current font size, which can be changed also. \scalerel allows an object to be ...

**2** Using the tipa package, this is what I came up with. \textschwasci is a \tiny \textsci stacked over a \tiny \textschwa by putting the latter in a \raisebox and kerning it back directly over the former. In \ctextschwasci the two symbols are a bit closer together. As the symbols are quite small, \ctextschwasci is a slightly bigger, but somewhat uglier-looking ...

**2** You can use \overset:

    \documentclass[11pt]{elsarticle}
    \usepackage[T1]{fontenc}
    \usepackage{mathtools}
    \begin{document}
    $\overset{\triangle}{\nabla}$
    $\overset{\heartsuit}{\spadesuit}$
    \end{document}

**2** Based on Scott H.'s pointer, I can give myself an italic esh by coopting an italic long s as follows. (The general issue—about creating characters from existing parts—remains. I'll post a separate question about that.) After invoking kpfonts.sty, I redefined \ss (as I don't need 'ß') as a one-argument command:

    \renewcommand{\ss}[1]{% line
    ...

Only top voted, non community-wiki answers of a minimum length are eligible.
https://www.physicsforums.com/threads/integral-of-cosine-function.463505/
# Integral of cosine function

Hi. I have been experimenting a little to come up with the following "conjecture"

$$\int_0^{2\pi}d\phi\, f(a+b\cos\phi)\sin\phi=0$$

where a and b are arbitrary constants and f(x) is any function. Is this true? I guess it can be shown by expanding f in a power series of cosines?

**tiny-tim** (Homework Helper): hi daudaudaudau! doesn't work for f = √, a = b

**LeonhardEuler** (Gold Member): Yes, I'm getting that this is true. It can be proven using the substitution

$$x=\cos{\phi},\qquad dx=-\sin{\phi}\,d\phi$$

But this substitution is not 1-to-1: each x value corresponds to 2 phi values on [0, 2π]. So you need to break the region of integration into [0, π] and [π, 2π]. If you look at the graph of the cos function, you will see it is 1-to-1 on these two intervals and goes from 1 to -1 on [0, π] and from -1 to 1 on [π, 2π]. So the integral becomes:

$$\int_0^{2\pi}d\phi\, f(a+b\cos\phi)\sin\phi= \int_1^{-1}-f(a+bx)\,dx + \int_{-1}^{1}-f(a+bx)\,dx = \int_{-1}^{1}f(a+bx)\,dx + \int_{-1}^{1}-f(a+bx)\,dx = 0$$

Tiny-tim: I get this even in the example you gave.

**tiny-tim**: hmm … I think I've been misled by the ambiguity of the √ function. Yes, we can get it directly from the original integral: if f = g', then (up to a constant factor) it's ∫ [g(a + b cos φ)]′ dφ = [g(a + b cos φ)] evaluated from φ = 0 to φ = 2π, which is zero.

**LeonhardEuler**: Happens to everyone. That is a better way of proving it.

**daudaudaudau**: What happened to the sine function?

**tiny-tim**: chain rule

**daudaudaudau**: clever :-)
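The identity is also easy to sanity-check numerically. The sketch below is my own illustration: it evaluates the integral on a uniform periodic grid for a few arbitrary choices of f, including tiny-tim's f = √ with a = b, where the argument a + b cos φ stays non-negative.

```python
import numpy as np

def check(f, a, b, n=200_000):
    r"""Approximate \int_0^{2\pi} f(a + b*cos(phi)) * sin(phi) dphi.

    Uses the rectangle rule on a uniform periodic grid, which is highly
    accurate for periodic integrands.
    """
    phi = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    dphi = 2.0 * np.pi / n
    return float(np.sum(f(a + b * np.cos(phi)) * np.sin(phi)) * dphi)

# A few arbitrary smooth choices of f (my own examples) -- all should vanish.
for f, a, b in [(np.exp, 0.3, 1.7), (np.cos, -2.0, 0.5),
                (lambda x: x**3, 1.0, 2.0), (np.sqrt, 1.0, 1.0)]:
    assert abs(check(f, a, b)) < 1e-6
```

Every case comes out as numerical zero, consistent with the substitution argument and with the antiderivative argument in the thread.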
https://gamedev.stackexchange.com/questions/13739/understanding-obj-file
# Understanding .obj file

I downloaded a 3D model of a car. Judging by the skins and images available, the model treats the wheels and car body as separate nodes. I want to access its child nodes (the wheels) to specify animation, but I don't know the names of the child nodes. Is it possible to figure out the names of the child nodes by looking at the file content? I have an .obj file of the 3D model. I am using JMonkey 2.

- For the record, .obj is a pretty common file extension; you should first of all find out which of the different formats you are dealing with. – aaaaaaaaaaaa Jun 16 '11 at 9:59
- It is a Wavefront file; the groups of this file have numerical names 000 through 037. – aaaaaaaaaaaa Jun 16 '11 at 13:23
- @eBusiness Can you tell me how to access them (I prefer JMonkey)? That model is a Trimesh. Is there a way to identify which number refers to which mesh — in this case, the group id of a wheel? – Niroshan Jun 16 '11 at 15:07
- I can't tell, and I don't know JMonkey. Though I'm not certain that there is an easy way of doing what you want: it's only the triangles that are grouped, not the vertices, and you'd animate a model by moving the vertices. But then again, I'm not an expert 3D programmer, so I can't say for sure that there isn't some trick. – aaaaaaaaaaaa Jun 16 '11 at 16:18

According to the obj file specification it should be a string after the 'g' command, but from my experience each obj exporter handles those things differently. So the faces (and the corresponding vertices, normals and texture coordinates) after the line g part 013, up to the next line with a g command, are the mesh of the wheel, and so on.
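Since Wavefront .obj is plain text, you can list the group names (the 'g' lines the answer refers to) with a few lines of script before touching the engine. This sketch is not JMonkey-specific, and the file path is of course your own; in the model from the question the names would be the numeric 000–037 mentioned in the comments.

```python
def obj_groups(path):
    """Return .obj group names ('g' lines) with the number of faces in each.

    The bookkeeping mirrors the answer above: every 'f' (face) line after a
    'g' line belongs to that group, until the next 'g' line appears.
    """
    groups, current = {}, "(default)"
    with open(path) as fh:
        for line in fh:
            parts = line.split()
            if not parts:
                continue
            if parts[0] == "g":
                current = " ".join(parts[1:]) or "(default)"
                groups.setdefault(current, 0)
            elif parts[0] == "f":
                groups[current] = groups.get(current, 0) + 1
    return groups
```

Sorting the result by face count usually makes it obvious which groups are wheels (four similar small groups) and which is the body (one large group).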
https://mathematica.stackexchange.com/questions/69648/workingprecision-causes-issue-in-the-nintegrate
# WorkingPrecision causes issue in NIntegrate

I really can't figure out why my code sometimes is not working. My integrals involve two variables (k and kz). The integration range for both of them is from zero to infinity. I found out that in some cases (not often), when the working precision is in some specific range, Mathematica gives me an error message saying that "the integrand has evaluated to non-numerical values...". However, if I change WorkingPrecision to a higher or lower value, it works just fine. I am not sure why. Hope you can help me with this. I copy my code here. Thanks a lot!

    Clear[sofindmassz];
    sofindmassz[λ_, T_, μ_, Δ_, opt : OptionsPattern[{workprec -> 16}]] :=
     Module[{wp, ϵ, ξ, E, dϵ, ddϵ, fBCSE, fBCSξ, f1d, mInth, mInt},
      wp = OptionValue[workprec];
      ϵ[h_] := k^2 + kz^2 + 2 λ h k;
      ξ[h_] := ϵ[h] - μ;
      E[h_] := √(ξ[h]^2 + Δ^2);
      dϵ[h_] := 2 kz;
      ddϵ = 2;
      f1d[h_] := -Exp[ξ[h]/T]/(T (Exp[ξ[h]/T] + 1)^2);
      fBCSE[h_] := Tanh[E[h]/(2 T)];
      fBCSξ[h_] := Tanh[ξ[h]/(2 T)];
      mInth[h_] :=
       k ((2 f1d[h] + E[h]/Δ^2 ((1 + ξ[h]^2/E[h]^2) (fBCSE[h] - fBCSξ[h]) +
              (1 - ξ[h]/E[h])^2 fBCSξ[h])) (dϵ[h])^2 -
          1/2 (fBCSξ[h] - ξ[h]/E[h] fBCSE[h]) ddϵ);
      mInt = mInth[1] + mInth[-1];
      Return[-Quiet[
          NIntegrate[
           NIntegrate[mInt, {k, 0, kz},
            Method -> {Automatic, "SymbolicProcessing" -> 0},
            WorkingPrecision -> wp], {kz, 0, ∞},
           Method -> {Automatic, "SymbolicProcessing" -> 0},
           WorkingPrecision -> wp]] -
        Quiet[NIntegrate[
           NIntegrate[mInt, {kz, 0, k},
            Method -> {Automatic, "SymbolicProcessing" -> 0},
            WorkingPrecision -> wp], {k, 0, ∞},
           Method -> {Automatic, "SymbolicProcessing" -> 0},
           WorkingPrecision -> wp]]]]

For this code, if I run the following values,

    sofindmassz[0, 0.14921714620005236, 0.07455393513003296, 1.016522853606922, workprec -> 15]

the error message pops up. If I change workprec to 14, it works! Don't know what's wrong.

- Welcome to Mathematica.SE! I suggest the following: 1) As you receive help, try to give it too, by answering questions in your area of expertise.
2) Read the faq! 3) When you see good questions and answers, vote them up by clicking the gray triangles, because the credibility of the system is based on the reputation gained by users sharing their knowledge. Also, please remember to accept the answer, if any, that solves your problem, by clicking the checkmark sign! – Michael E2 Dec 25 '14 at 2:04
- The problem relates to the nesting of NIntegrate. See 69638. Why it singles out workprec -> 15 is unclear to me, although it may be related to shifting from double precision to higher. – bbgodfrey Dec 25 '14 at 3:22
- Other values of WorkingPrecision produce warning messages too, but Quiet suppresses them. – bbgodfrey Dec 25 '14 at 12:48
- Thanks, Michael and bbgodfrey. It works. – Chien-Te Wu Dec 25 '14 at 17:23
- @MichaelE2 I'd like to upvote it. However, I don't have enough reputation. Could you let me know if there is any other way to do that? Thanks! – Chien-Te Wu Jan 25 '15 at 21:08

One way to address the example is to provide input of the same or higher (arbitrary) precision:

    sofindmassz[0, 0.1492171462000523615, 0.0745539351300329615, 1.01652285360692215, workprec -> 15]
    (* 0.643229985259241 *)

One can get overflow, probably from

    f1d[h_] := -Exp[ξ[h]/T]/(T (Exp[ξ[h]/T] + 1)^2)

because the size of ξ[h] can be quite large ~ k^2 when integrating to infinity. The alternative

    f1d[h_] := -Exp[-ξ[h]/T]/(T (Exp[-ξ[h]/T] + 1)^2)

leads to underflow and other problems. Depending on how I played with it, it might lead to a kernel crash from running out of memory or take so long I aborted it. If you plot the integrand, it looks good and smooth up to k == 1000; but as k exceeds 10000 it starts to get bumpy and the bumps grow larger. Here are some sample values of the integrand:

    Grid@Table[mInt, {k, 1000000, 10000000, 1000000}, {kz, 0, k, 1000000}]

I get the same results no matter what precision I use for the parameters in sofindmassz.
While they are not the same as the sample points used by NIntegrate, the occasional extremely large values (compared to the estimated value of the integral) are disturbing. It seems likely that numerical precision, underflow and/or overflow play a role in the evaluation of this integral. I've been unable to chase down the clear reasons for the difficulty. As a result I cannot comment on the accuracy of the result above, other than to say it was the most common result (up to the first 8 or so digits).

Using a double integral

    NIntegrate[mInt, {k, 0, ∞}, {kz, 0, k},
     Method -> {Automatic, "SymbolicProcessing" -> 0}, WorkingPrecision -> wp]

instead of nested integrals did not seem to help. (It behaves somewhat differently, but it does not behave better.)

By the way, I am not a fan of using Quiet, especially to suppress numerics-related messages, unless I know why the warnings and errors are occurring and in fact expect them. Usually one needs to understand why Mathematica is warning of a possible numerical error. (If you get a lot, then you can set the preferences to log them in the messages window.)

Replace your Return line (the final line in your code) by

    fkz[z_?NumericQ] := NIntegrate[mInt, {k, 0, z}, WorkingPrecision -> wp];
    fk[z_?NumericQ] := NIntegrate[mInt, {kz, 0, z}, WorkingPrecision -> wp];
    Quiet[-NIntegrate[fkz[kz], {kz, 0, \[Infinity]}, WorkingPrecision -> wp] -
       NIntegrate[fk[k], {k, 0, \[Infinity]}, WorkingPrecision -> wp]]

to eliminate the unwanted warning message. Creating separate functions fkz and fk with argument z_?NumericQ is the solution. The other changes merely delete code that no longer is necessary. Note, however, that Quiet hides still other warning messages.
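The z_?NumericQ trick is really a general pattern for nested numerical integration: wrap the inner integral in an ordinary function of the outer variable, so the outer integrator only ever sees numbers rather than a symbolic inner integral. Here is a language-neutral sketch of the same structure in Python, with a toy trapezoid rule and a made-up Gaussian integrand (not the physics integrand above):

```python
import numpy as np

def integrate(f, a, b, n=2001):
    """Composite trapezoid rule -- a stand-in for NIntegrate in this sketch."""
    x = np.linspace(a, b, n)
    y = f(x)
    h = (b - a) / (n - 1)
    return h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

# Inner integral wrapped as a plain numeric function of the outer variable,
# mirroring fkz[z_?NumericQ] in the answer above.
def inner(kz):
    return integrate(lambda k: np.exp(-(k**2 + kz**2)) * k, 0.0, kz)

# The outer integration only ever evaluates inner() at concrete numbers.
outer = integrate(np.vectorize(inner), 0.0, 5.0)
```

Here the inner result has the closed form e^(-kz^2) (1 - e^(-kz^2)) / 2, so the outer integral can be checked against (√π − √(π/2))/4 ≈ 0.1298.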
https://koalaverse.github.io/machine-learning-in-R/decision-trees.html
# Chapter 1 Classification and Regression Trees (CART)

## 1.1 Introduction

Classification and regression trees (CART) are a non-parametric decision tree learning technique that produces either classification or regression trees, depending on whether the dependent variable is categorical or numeric, respectively. CART is both a generic term for tree algorithms and the name of Breiman's original algorithm for constructing classification and regression trees.

- Decision Tree: A tree-shaped graph or model of decisions used to determine a course of action or show a statistical probability.
- Classification Tree: A decision tree that performs classification (predicts a categorical response).
- Regression Tree: A decision tree that performs regression (predicts a numeric response).
- Split Point: A split point occurs at each node of the tree where a decision is made (e.g. x > 7 vs. x ≤ 7).
- Terminal Node: A terminal node is a node which has no descendants (child nodes). Also called a "leaf node."

## 1.2 Properties of Trees

- Can handle huge datasets.
- Can handle mixed predictors implicitly – numeric and categorical.
- Easily ignore redundant variables.
- Handle missing data elegantly through surrogate splits.
- Small trees are easy to interpret.
- Large trees are hard to interpret.
- Prediction performance is often poor (high variance).

## 1.3 Tree Algorithms

There are a handful of different tree algorithms in addition to Breiman's original CART algorithm, namely ID3, C4.5 and C5.0, all created by Ross Quinlan. C5.0 is an improvement over C4.5; however, the C4.5 algorithm is still quite popular since the multi-threaded version of C5.0 is proprietary (although the single-threaded version is released under the GPL).

## 1.4 CART vs C4.5

Here are some of the differences between CART and C4.5:

- Tests in CART are always binary, but C4.5 allows two or more outcomes.
- CART uses the Gini diversity index to rank tests, whereas C4.5 uses information-based criteria.
- CART prunes trees using a cost-complexity model whose parameters are estimated by cross-validation; C4.5 uses a single-pass algorithm derived from binomial confidence limits.
- With respect to missing data, CART looks for surrogate tests that approximate the outcomes when the tested attribute has an unknown value, but C4.5 apportions the case probabilistically among the outcomes.

Decision trees are formed by a collection of rules based on variables in the modeling data set:

1. Rules based on variables' values are selected to get the best split to differentiate observations based on the dependent variable.
2. Once a rule is selected and splits a node into two, the same process is applied to each "child" node (i.e. it is a recursive procedure).
3. Splitting stops when CART detects no further gain can be made, or some pre-set stopping rules are met. (Alternatively, the data are split as much as possible and then the tree is later pruned.)

Each branch of the tree ends in a terminal node. Each observation falls into one and exactly one terminal node, and each terminal node is uniquely defined by a set of rules.

## 1.5 Splitting Criterion & Best Split

The original CART algorithm uses the Gini Impurity, whereas ID3, C4.5 and C5.0 use Entropy or Information Gain (related to Entropy).

### 1.5.1 Gini Impurity

Used by the CART algorithm, Gini Impurity is a measure of how often a randomly chosen element from the set would be incorrectly labeled if it was randomly labeled according to the distribution of labels in the subset. Gini impurity can be computed by summing the probability $$f_i$$ of each item being chosen times the probability $$1 - f_i$$ of a mistake in categorizing that item. It reaches its minimum (zero) when all cases in the node fall into a single target category.

To compute Gini impurity for a set of m items, suppose $$i \in \{1, 2, ..., m\}$$, and let $$f_i$$ be the fraction of items labeled with value $$i$$ in the set.
$I_{G}(f)=\sum_{i=1}^{m}f_{i}(1-f_{i})=\sum_{i=1}^{m}(f_{i}-f_{i}^{2})=\sum_{i=1}^{m}f_{i}-\sum_{i=1}^{m}f_{i}^{2}=1-\sum_{i=1}^{m}f_{i}^{2}=\sum_{i\neq k}f_{i}f_{k}$

### 1.5.2 Entropy

Entropy, $$H(S)$$, is a measure of the amount of uncertainty in the (data) set $$S$$ (i.e. entropy characterizes the (data) set $$S$$).

$H(S)=-\sum_{x\in X}p(x)\log_{2}p(x)$

Where:

- $$S$$ is the current (data) set for which entropy is being calculated (it changes at every iteration of the ID3 algorithm)
- $$X$$ is the set of classes in $$S$$
- $$p(x)$$ is the ratio of the number of elements in class $$x$$ to the number of elements in set $$S$$

When $$H(S)=0$$, the set $$S$$ is perfectly classified (i.e. all elements in $$S$$ are of the same class).

In ID3, entropy is calculated for each remaining attribute. The attribute with the smallest entropy is used to split the set $$S$$ on this iteration. The higher the entropy, the higher the potential to improve the classification.

### 1.5.3 Information Gain

Information gain $$IG(A)$$ is the measure of the difference in entropy from before to after the set $$S$$ is split on an attribute $$A$$: in other words, how much the uncertainty in $$S$$ was reduced after splitting set $$S$$ on attribute $$A$$.

$IG(A,S)=H(S)-\sum_{t\in T}p(t)H(t)$

Where:

- $$H(S)$$ is the entropy of set $$S$$
- $$T$$ is the set of subsets created from splitting set $$S$$ by attribute $$A$$, such that $$S=\bigcup_{t\in T}t$$
- $$p(t)$$ is the ratio of the number of elements in $$t$$ to the number of elements in set $$S$$
- $$H(t)$$ is the entropy of subset $$t$$

In ID3, information gain can be calculated (instead of entropy) for each remaining attribute. The attribute with the largest information gain is used to split the set $$S$$ on this iteration.

## 1.6 Decision Boundary

This is an example of a decision boundary in two dimensions of a (binary) classification tree.
The black circle is the Bayes Optimal decision boundary and the blue square-ish boundary is learned by the classification tree.

## 1.7 Missing Data

CART is an algorithm that deals effectively with missing values through surrogate splits.

## 1.8 Visualizing Decision Trees

Tony Chu and Stephanie Yee designed an award-winning visualization of how decision trees work called "A Visual Introduction to Machine Learning." Their interactive D3 visualization is available here.

## 1.9 CART Software in R

Since it's more common in machine learning to use trees in an ensemble, we'll skip the code tutorial for CART in R. For reference, trees can be grown using the rpart package, among others.
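The impurity and information-gain measures from Section 1.5 can be computed directly; below is a minimal Python sketch (the labels and the candidate split are chosen purely for illustration):

```python
from collections import Counter
from math import log2

def gini_impurity(labels):
    """Gini impurity: 1 - sum(f_i^2) over the class fractions f_i."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def entropy(labels):
    """Shannon entropy H(S) = -sum p(x) log2 p(x) over classes x."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(parent, subsets):
    """IG(A, S) = H(S) - sum |t|/|S| * H(t) over the subsets t of a split."""
    n = len(parent)
    return entropy(parent) - sum(len(t) / n * entropy(t) for t in subsets)

labels = ["a", "a", "b", "b"]
print(gini_impurity(labels))                                # 0.5
print(entropy(labels))                                      # 1.0
print(information_gain(labels, [["a", "a"], ["b", "b"]]))   # 1.0 (a perfect split)
```

A pure node gives zero impurity under either measure, which is why both criteria drive the recursive splitting toward homogeneous terminal nodes.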
http://www.hpmuseum.org/forum/thread-1773-post-15891.html
WP34S: Reset when printing statistical registers? 07-03-2014, 02:45 PM Post: #21 walter b On Vacation Posts: 1,957 Joined: Dec 2013 RE: WP34S: Reset when printing statistical registers? (07-03-2014 11:56 AM)Marcus von Cube Wrote:  Here is a short summary of the possible startup messages: ... Memory seems to be intact: "Reset" That's not what you wanted to tell us, is it? When I start my WP 34S and "Memory seems to be intact" it will display the last content of X. It does so consistently. (07-03-2014 11:56 AM)Marcus von Cube Wrote:  "Reset" is an indication that the processor failed for some reason, be it a programming error (illegal instruction, stack corruption, etc.) or some physical condition (power too low for the selected processor speed). Hmmh, now we're almost as informed as before, aren't we? d:-? 07-03-2014, 09:53 PM Post: #22 Paul Dale Senior Member Posts: 1,378 Joined: Dec 2013 RE: WP34S: Reset when printing statistical registers? (07-03-2014 02:45 PM)walter b Wrote:  That's not what you wanted to tell us, is it? When I start my WP 34S and "Memory seems to be intact" it will display the last content of X. It does so consistently. Pressing the on button doesn't actually start the device. It was already running. Start up in this context is the CPU's initial boot process. Yes, the nomenclature is ambiguous - Pauli 07-03-2014, 10:13 PM Post: #23 jebem Senior Member Posts: 1,260 Joined: Feb 2014 RE: WP34S: Reset when printing statistical registers? (07-03-2014 10:39 AM)rkf Wrote: (07-03-2014 10:30 AM)Paul Dale Wrote:  ... Most likely, we're drawing too much current from the coin cells and their voltage is dropping because of this. I've seen this too from time to time and sometimes the low battery indicator comes on briefly. ... What a pity, just set ()DLAY to 0 again to provoke the Reset message, and watched closely the LCD panel while doing a print summation registers, trying to glimpse the Low Batt annunciator. 
Unfortunately, this time neither a Reset shows up, nor a broken I/O character is printed - no matter how often I try. ?!? Kind regards, Ralf. Interesting subject. A couple of questions... - How far can the calculator be from the printer and still get a good connection? IMHO the IR LED current should not be too high when compared with what the calculator processor can consume. I would say that the HP-30B can consume as much as 20mA in many situations, and the IR LED should only add a few mA more, maybe 2 to 4mA in a proper implementation and using an adequate and efficient IR LED type (however, with those current values I would expect the working distance range to be small, like 30cm or about 1 foot at maximum). But the real IR LED current consumption can be higher, depending on the circuit used to drive it. If the IR LED series current limiter is absent or is of a very low value (below 330 ohm), it can deplete the batteries in shorter time than normal, but you should be able to communicate at larger distances in this case! From your description, the probable cause seems to be a voltage drop condition from the batteries you are using. Sometimes the batteries recover by themselves, yes, but in the end it will happen again soon. And not all the battery brands are created equal... I would try a new set of battery cells and see. I get my batteries from China in the TAS for peanuts and the quality is more than acceptable for the asking price and free delivery. As a final note, if you enjoy a little DIY and have a multimeter, you may use an external 3 Volt power supply (two AA batteries in series will do), a couple of cable clips to connect it to your calculator, and measure the current consumption when printing to find out more about your calculator current consumption. Jose Mesquita 07-03-2014, 10:15 PM (This post was last modified: 07-04-2014 03:33 AM by pito.) Post: #24 pito Member Posts: 127 Joined: Jun 2014 RE: WP34S: Reset when printing statistical registers?
(07-03-2014 09:29 AM)walter b Wrote: (07-03-2014 09:24 AM)pito Wrote:  Do we measure the battery voltage (BATT) under full load?? When not, the BATT does not show the proper numbers.. Do you want to switch the WP 34S to full load just for measuring the voltage by BATT? d:-? Of course YES! When you measure the wp34s current in the key_scanning_mode it takes maybe 150uA current, when you switch to let say 30MHz it takes for example 40mA current. Provided the internal resistance of the CR2032 is 10-1000 ohm (based on its capacity consumed, up to 100ohm at 60% used) you may easily calculate the difference between the measurements.. (done for you - see below for a single CR2032 cell, for 2 in parallel half the Rin). PS: try to measure your car battery after a year of having it resting on the shelf in your garage - you will measure 14V when not loaded happily, but your car will definitely not start Attached File(s) Thumbnail(s) 07-04-2014, 04:37 AM Post: #25 rkf Member Posts: 52 Joined: Apr 2014 RE: WP34S: Reset when printing statistical registers? (07-03-2014 11:08 AM)walter b Wrote:  ... Anyway, Ralf, if you can reproduce it once again, please SAVE your data before. I'd be interested in whether "Restored" will show up instead of "Reset" then. ... Today I tried hard to reproduce it. First I interchanged some times the batteries of my both WP34S. Both have pairs of 3.02V each, with BATT displaying 3.0, but no way to reproduce the discussed behaviour. Then I took the batteries of my PR Error HP15C LE. Those batteries had both 2.91V, and a BATT value of 2.8 - and voilá! The printer managed to print only the first line, and even did not accomplish printing the broken I/O symbol, because of a shutdown/restart of the WP34S - which ended in "Restored". I hope this helps, Walter. Kind regards, Ralf. 07-04-2014, 05:15 AM Post: #26 walter b On Vacation Posts: 1,957 Joined: Dec 2013 RE: WP34S: Reset when printing statistical registers? 
(07-04-2014 04:37 AM)rkf Wrote:  I hope this helps, Walter. Thank you very much, Ralf. d:-) 07-04-2014, 05:31 AM (This post was last modified: 07-04-2014 06:14 AM by rkf.) Post: #27 rkf Member Posts: 52 Joined: Apr 2014 RE: WP34S: Reset when printing statistical registers? (07-03-2014 10:13 PM)jebem Wrote:  ... A couple of questions... - How far can the calculator be from the printer and still get a good connection? ... - I tried to measure this as exactly as possible. The maximum working distance I reached was 150 mm (for our U.S. readers: 5 29/32 inches, Happy Independence Day, BTW!) Edit: Just tested my HP42S with respect to Infrared Printing. The maximum working distance of my 1991 model is 1100 mm, thus a good deal further. Kind regards, Ralf. 07-04-2014, 07:49 AM Post: #28 walter b On Vacation Posts: 1,957 Joined: Dec 2013 RE: WP34S: Reset when printing statistical registers? Thank you for the calculations. We shouldn't forget that the WP 34S switches off for voltages <2.1V. d:-) 07-04-2014, 08:03 AM Post: #29 jebem Senior Member Posts: 1,260 Joined: Feb 2014 RE: WP34S: Reset when printing statistical registers? (07-04-2014 05:31 AM)rkf Wrote: (07-03-2014 10:13 PM)jebem Wrote:  ... A couple of questions... - How far can the calculator be from the printer and still get a good connection? ... - I tried to measure this as exactly as possible. The maximum working distance I reached was 150 mm (for our U.S. readers: 5 29/32 inches, Happy Independence Day, BTW!) Edit: Just tested my HP42S with respect to Infrared Printing. The maximum working distance of my 1991 model is 1100 mm, thus a good deal further. Kind regards, Ralf. Somehow I have similar experience here. I don't have a printer but I'm using my own IR receiver for testing purposes, and it seems I need to find a more efficient IR LED circuit for my 34S when I have the time and the mood for that. I have one "as-new-in-the-box" HP-42S got the previous week, and it can reach larger distances than my 34S. 
But there are several possible reasons for this different behavior to be investigated if and when time and mood allows it. Jose Mesquita 07-04-2014, 11:12 AM (This post was last modified: 07-04-2014 11:22 AM by pito.) Post: #30 pito Member Posts: 127 Joined: Jun 2014 RE: WP34S: Reset when printing statistical registers? (07-04-2014 07:49 AM)walter b Wrote:  Thank you for the calculations. We shouldn't forget that the WP 34S switches off for voltages <2.1V. d:-) I've taken a fresh CR2032 (Vinnic?) from my stock: Code: 1. Voltage without a load (actually 10Mohm, I = ~332nA) = 3.32V 2. Voltage with 200ohm load = 2.89V 3. Internal resistance of the battery is  Rint = dV@Rint/dI@Rint = (3.32V - 2.89V)/(2.89V/200ohm - 332nA) = 29.75 ohm So my above estimation was rather optimistic. Both batteries in parallel will have ~15ohm Rint when fresh. The load of ~15mA will create a ~0.22V drop with two fresh batteries in parallel. When measuring with BATT not loaded with the "nominal load", you will measure ~3.3V, what is of course a "Hausnummer" When 50-60% capacity consumed, a 10mA load (a calculation for example) will drop BATT below 2V (but the BATT will show you 3V). 07-04-2014, 06:51 PM Post: #31 Marcus von Cube Senior Member Posts: 754 Joined: Dec 2013 RE: WP34S: Reset when printing statistical registers? (07-03-2014 09:24 AM)pito Wrote:  Do we measure the battery voltage (BATT) under full load?? When not, the BATT does not show the proper numbers.. I agree on your conclusion but I can't help it. The battery monitor is more software than hardware. The brown out detection circuit is programmed to a threshold and signals if the current supply voltage is below or above the programmed value. To detect the actual voltage the software updates the threshold in regular intervals and monitors the result bit. The main problem is the time the circuit takes to stabilize after a change to the threshold. So the algorithm is: 1. Get the current bit value 2. 
Is the supply voltage below the threshold? If yes decrease the value, otherwise increase it. 3. Wait for the next interrupt to repeat the cycle. The time between interrupts is needed for the output to stabilize. If I want to measure under load I'd need to switch to high speed whenever I adjust the threshold value, i. e. all the time. This would deplete the batteries unnecessarily. You can measure the voltage under load by executing BATT in a programmed loop and let it run for a while. A running program is executed at full speed unless you set SLOW mode which roughly halves the speed. Marcus von Cube Wehrheim, Germany http://www.mvcsys.de http://wp34s.sf.net http://mvcsys.de/doc/basic-compare.html 07-04-2014, 07:29 PM (This post was last modified: 07-04-2014 07:41 PM by pito.) Post: #32 pito Member Posts: 127 Joined: Jun 2014 RE: WP34S: Reset when printing statistical registers? (07-04-2014 06:51 PM)Marcus von Cube Wrote: (07-03-2014 09:24 AM)pito Wrote:  Do we measure the battery voltage (BATT) under full load?? When not, the BATT does not show the proper numbers.. .. I agree on your conclusion but I can't help it. You do measure the battery with BOD regularly _after_ this chunk: Code:  ..if ( GoFast ) {                 /*                  *  We are doing serious work                  */                 if ( ++GoFast == 5 ) {                         speed = SPEED_HALF;                 }                 else if ( GoFast == 10 ) {                         speed = SPEED_HIGH;                         GoFast = 0;                 }         }     >>>>>>>>>>>>>>HERE YOU MEASURE BATT.. As we know quite well how a CR2032 behaves, you can predict the switching into SPEED_HALF (20mA) or SPEED_HIGH (40mA) will trigger the reset. So you may not to allow to switch into SPEED_HALF or SPEED_HIGH modi (based on the previous BATT values). That is maybe a small mod to the above code.. Just an idea.. 
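pito's internal-resistance arithmetic from the posts above can be reproduced in a few lines of Python (the measured values and the ~15 mA load figure are the ones reported in the thread):

```python
# Measured values for a fresh CR2032, as reported in the thread.
V_OPEN = 3.32      # volts, measured through a 10 Mohm meter (I ~= 332 nA)
V_LOADED = 2.89    # volts, with a 200 ohm load
R_LOAD = 200.0     # ohms
I_OPEN = 332e-9    # amps drawn by the meter itself

# Rint = dV / dI between the two operating points.
i_loaded = V_LOADED / R_LOAD
r_int = (V_OPEN - V_LOADED) / (i_loaded - I_OPEN)   # ~29.8 ohm per fresh cell

# Two cells in parallel halve the internal resistance.
r_pair = r_int / 2

# Voltage sag under the ~15 mA load of a fast-clocked CPU plus IR LED.
sag = 0.015 * r_pair                                 # ~0.22 V
print(round(r_int, 2), round(r_pair, 2), round(sag, 2))
```

With a partly depleted cell the internal resistance is far higher, which is why a BATT reading taken at idle can look healthy while the supply collapses below the brown-out threshold under load.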
07-04-2014, 07:54 PM (This post was last modified: 07-04-2014 07:56 PM by Marcus von Cube.) Post: #33 Marcus von Cube Senior Member Posts: 754 Joined: Dec 2013 RE: WP34S: Reset when printing statistical registers? (07-04-2014 07:29 PM)pito Wrote:  As we know quite well how a CR2032 behaves, you can predict the switching into SPEED_HALF (20mA) or SPEED_HIGH (40mA) will trigger the reset. So you may not to allow to switch into SPEED_HALF or SPEED_HIGH modi (based on the previous BATT values). That is maybe a small mod to the above code.. Just an idea.. The speed switch itself is smart enough and does not allow SPEED_HIGH if the battery voltage is below a certain value. SPEED_HIGH is then treated the same as SPEED_HALF. The battery measurement algorithm is so slow that a single interrupt cycle makes no noticeable difference. Disallowing SPEED_HALF isn't an option in my view. This is a situation where a shutdown makes more sense. Marcus von Cube Wehrheim, Germany http://www.mvcsys.de http://wp34s.sf.net http://mvcsys.de/doc/basic-compare.html 07-04-2014, 08:34 PM Post: #34 pito Member Posts: 127 Joined: Jun 2014 RE: WP34S: Reset when printing statistical registers? (07-04-2014 07:54 PM)Marcus von Cube Wrote:  Disallowing SPEED_HALF isn't an option in my view. This is a situation where a shutdown makes more sense. Another idea - maybe a quick BATT measurement "under full load" upon switching the calculator "on" may provide an info on the "real" battery status. When it shuts down during that measurement the user knows what is going on.. When the value will be at the edge you may slow down and indicate.. 02-13-2015, 04:06 AM Post: #35 matthiaspaul Senior Member Posts: 385 Joined: Jan 2015 RE: WP34S: Reset when printing statistical registers? For easier cross-reference a similar thread can be found here: Greetings, Matthias -- "Programs are poems for computers."
02-13-2015, 01:00 PM Post: #36 ElectroDuende Member Posts: 139 Joined: Jul 2014 RE: WP34S: Reset when printing statistical registers? (07-03-2014 10:13 PM)jebem Wrote:  I get my batteries from China in the TAS for peanuts and the quality is more than acceptable for the asking price and free delivery. It's a long time since this post; I was reviewing WP34S issues and came along it, and I thought it may be interesting to know this. Last year I got a pack of 20 CR2032 cells from a Chinese seller, with an expiration date of mid 2017. Some days ago one of my daughter's toys with one of them in it stopped working, and after opening it I saw the battery had leaked (fortunately the toy was ok). When I opened my spare batteries box, I saw that some of the unused Chinese CR2032 cells had also leaked in the package! So for the future... no cheap Chinese batteries in my calculators.
http://mrhonner.com/
## Regents Recap — June 2014: What is an "Absolute Value Equation"?

Here is another installment in my series reviewing the NY State Regents exams in mathematics.

The following question appeared on the 2014 Integrated Algebra exam.

My question is this: what, exactly, is an "absolute value equation"? According to the scoring key, the correct answer to this question is (2). This suggests that the exam writers believe an "absolute value equation" to be some transformation of $y = |x|$.

But "absolute value equation" is not a precise description of what the exam writers seem to be looking for. It would be hard to argue that $y = |2b^{x}|$ is not an "absolute value equation", but that appears to be the graph depicted in (1). With some work, all the given graphs could be represented as equations involving absolute values (an exercise left to the reader).

I doubt this imprecision caused any student to get this question wrong, but as I have argued again and again, these exams should stand as exemplars of mathematical precision. These exams should not model imprecise language, poor notation, and improper terminology. We do our students a great disservice by constantly asking them to guess what the exam writers were trying to say.

## Regents Recap — June 2014: When Good Math Becomes Bad Tests

Here is another installment in my series reviewing the NY State Regents exams in mathematics.

It is a true geometric wonder that a triangle's medians always intersect at a single point. It is a remarkable and beautiful result, and the fact that the point of intersection is the centroid of the triangle makes it even more compelling.

This result should absolutely be a part of the standard Geometry curriculum. It is important and beautiful mathematics, it extends a fundamental notion of mathematics (symmetry) in new ways, and it is readily accessible through folding, balancing, compass construction, and coordinate geometry.
But here's what happens when high-stakes testing meets meaningful mathematics.

This wonderful result has been reduced to an easy-to-test trick: the centroid divides a median in a 2:1 ratio.

It's not hard to see how such a fact can quickly become an instructional focus when it comes to centroids: if that's how it's going to be tested, that's how it's going to be taught. Of course, teachers should do more than just teach to a test, but there's a lot riding on test results these days, and it's hard to blame teachers for focusing on test scores when politicians, policy makers, and administrators tell them their jobs depend on it.

This is just one example of many, from one test and one state. This is an inseparable component of standardized testing, and it can be found in all content areas and at all levels. And for those who argue that the solution is simply to make better tests, keep this in mind: New York has been giving math Regents exams for over eighty years. Why haven't we produced those better tests yet?

## Exploring Correlation and Regression in Desmos

I've created an interactive worksheet in Desmos for exploring some basic ideas in correlation and regression.

In the demonstration, four points and their regression line are given. A fifth point, in red, can be moved around, and changes in the regression line and correlation coefficient can be observed.

The shaded region indicates where the fifth point can be located in order to make (or keep) the correlation among the five points positive. The boundary of that region was a bit of a surprise to me!

You can access the worksheet here. Many interesting questions came to mind as I built and played around with this, so perhaps this may be of value to others. Feel free to use and share!

## Math Photo: Grey Geometry

I like the many shades of grey, and the many shades of conic section, present in this spire.
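The behavior in the Desmos correlation worksheet above can also be mimicked numerically; here is a sketch with four hypothetical fixed points (the worksheet's actual points are not given in the post), showing how moving the fifth point flips the sign of the correlation:

```python
def pearson_r(pts):
    """Sample Pearson correlation coefficient of a list of (x, y) points."""
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    sxy = sum((x - mx) * (y - my) for x, y in pts)
    sxx = sum((x - mx) ** 2 for x, _ in pts)
    syy = sum((y - my) ** 2 for _, y in pts)
    return sxy / (sxx * syy) ** 0.5

# Four fixed points (hypothetical); the fifth point is the one being moved.
base = [(1, 1), (2, 3), (3, 2), (4, 4)]
print(pearson_r(base + [(5, 5)]))     # positive: the fifth point reinforces the trend
print(pearson_r(base + [(10, -10)]))  # negative: the fifth point flips the correlation
```

Sweeping the fifth point over a grid and recording where `pearson_r` changes sign traces out exactly the boundary of the shaded region in the worksheet.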
## Math and Dinner — NYMC

I'll be giving a Math and Dinner talk for the New York Math Circle on Monday, August 4. Math and Dinner starts with some interesting mathematics at NYU's Courant Institute, and after the talk the conversation continues over dinner at a nearby restaurant. The series provides a fun and casual way for NYMC participants to chat about mathematics, teaching, and learning.

My talk is titled An Array of Matrix Explorations. Here is the description.

The world of matrices is rich and diverse, connecting ideas across disparate disciplines and extending familiar mathematical notions into unfamiliar territory. In this talk, participants will explore some common concepts in the high school curriculum–algebra, geometry, and trigonometry–through the mathematics of matrices, where their depth and connections can be seen in exciting ways.

The talk is free, but participants pay their own way for dinner. You can find out more about the series, and register, here.
https://www.oecd-ilibrary.org/sites/b2d1417b-en/index.html?itemId=/content/component/b2d1417b-en
# Annex A. Brief description of the PEM-Norway model

The OECD Policy Evaluation Model (PEM) is an equilibrium displacement model that contains explicit product and factor markets (see (OECD, 2005[1]; OECD, 2015[2]) for further details on the general structure of the model). These markets, which inter alia include land, chemicals and fertiliser use, provide a direct connection between economic policy, farm activities and their environmental consequences, in particular as regards water pollution and climate change.

PEM Norway distinguishes four outputs and 13 inputs. The outputs are wheat, coarse grains (barley and oats), milk and beef. Milk is processed further into fluid milk (e.g. drinking milk, yoghurt, cream), sold on the domestic market only, and industrial milk (e.g. cheese, milk powder, butter), which is sold on both the domestic market and the international market.

No factor is assumed to be completely fixed in production, but land and other farm-owned factors are assumed to be relatively more fixed (have lower price elasticities of supply) than the purchased factors. There are three farm-owned factors: land, cows, and a residual "other farm owned factors". The representation of the land market allows simulating payments based on area, payments based on non-current area (historical entitlements), and farm income. The set of purchased factors covers fertiliser, chemical use, interest, irrigation, feed, machinery and many others.

The PEM model for Norway follows the regionalisation used in the Norwegian Farm Accountancy Register and divides Norway into five regions. It is a stand-alone version of the model that takes world market prices as given and assumes that domestic market prices are fixed via negotiations with producer groups. Domestic consumer prices only adjust when needed to clear markets, to avoid additional subsidised exports. The model is calibrated to the situation in Norway in 2017.
Norwegian agriculture has four major objectives: food security, agriculture all over the country, value creation, and sustainability with lower greenhouse gas (GHG) emissions. For each objective, multiple indicators are produced using the model results.

Indicators for food security include self-sufficiency (on a calorie basis), farm land per 1 000 capita and cows per 1 000 capita. The energy content of food used in the calculations is 2 920 kcal per kg wheat, 703.5 kcal per kg milk and 1 697 kcal per kg beef. The numbers are taken from the agricultural sector model Jordmod (Mittenzwei, 2018[3]). The population in 2017 was 5.258 million inhabitants (SSB, 2020[4]).

The objective of agriculture all over the country is reflected in 13 indicators. Seven measure the share of various variables in central regions: overall land use, land use for wheat, land use for grains, land use for milk, land use for beef, milk production, and beef production. The eastern lowlands and Jæren are considered central regions with the best natural and climatic conditions for agriculture. Land use at the regional level makes up five additional indicators, while the last indicator measures overall land use compared to the baseline. This indicator is meant to capture whether land use in a region changes overall.

Value creation is measured via farm incomes and productivity. Fifteen productivity indicators are included, which differ with respect to the measurement of outputs and inputs. In general, productivity measures output in relation to input. The following formula has been used:

$\left(\frac{Output^{1}}{Output^{0}}-1\right)-\left(\frac{Input^{1}}{Input^{0}}-1\right)$

where superscripts 1 and 0 indicate the scenario and the baseline, respectively. Inputs and outputs are valued at base-year prices both in the scenarios and in the baseline. Total factor productivity measures productivity growth of all outputs and all inputs.
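The productivity indicator above can be sketched in a few lines (the values are illustrative, not model output):

```python
def productivity_growth(output_1, output_0, input_1, input_0):
    """(Output^1/Output^0 - 1) - (Input^1/Input^0 - 1):
    output growth minus input growth, with the scenario (1) and the
    baseline (0) both valued at base-year prices."""
    return (output_1 / output_0 - 1) - (input_1 / input_0 - 1)

# Illustrative: output rises 5% while input use falls 2%,
# giving a productivity gain of 7 percentage points.
print(round(productivity_growth(105.0, 100.0, 98.0, 100.0), 3))  # 0.07
```

Total factor productivity is the same calculation with aggregate output and aggregate input values; the per-commodity indicators restrict the numerator to a single output.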
Productivity indicators are presented for each of the four outputs (wheat, coarse grains, milk, and beef) for all inputs, purchased inputs and farm-owned inputs.

The fourth policy objective, sustainability with reduced greenhouse gas emissions, is reflected in indicators on GHG emissions, nutrient balances, and selected aspects of cultural landscapes. The parameters to calculate gaseous emissions and nutrient balances are adapted from data for Switzerland (OECD, 2015[2]). These parameters are specific to plains, hilly, and mountainous areas. The parameters for plains regions are applied to the eastern lowlands and Jæren. The central lowlands and the southern valleys are associated with hilly areas, while the parameters for mountainous areas are used for northern Norway.

Nitrogen balances and phosphorus balances measure inputs and outputs of the two nutrients from all sources and are calculated on a regional basis. The data used to construct the environmental indicators come from the OECD AEI database, along with additional calculations to disaggregate the environmental indicators to match each commodity covered by PEM. The N and P balance indicators were constructed following OECD and EUROSTAT guidelines (Eurostat, 2013[5]; OECD, 2013[6]).

Greenhouse gas emissions in CO2-equivalents are produced using a conversion factor of 25 for methane and 298 for nitrous oxide, following the AR4 report of the IPCC (GWP100). The GHG emission calculations were based on the national GHG inventory methods outlined in IPCC (2006), with the Tier 1 approach used to calculate N2O emissions from crops, and the Tier 2 approach used to calculate all other GHG emission sources.

Two environmental indicators shed light on aspects of cultural landscapes: the livestock density, defined as the number of animals per unit of land devoted to milk and beef production, and grassland as a share of total land use. Both indicators are calculated at the regional level.
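The CO2-equivalent aggregation described above is a simple weighted sum over gases; a sketch using the AR4 GWP100 factors cited in the text (the emission quantities themselves are illustrative):

```python
# AR4 (GWP100) conversion factors, as cited in the text.
GWP = {"CO2": 1.0, "CH4": 25.0, "N2O": 298.0}

def co2_equivalents(emissions):
    """Aggregate a dict of gas -> tonnes into tonnes of CO2-equivalents."""
    return sum(GWP[gas] * tonnes for gas, tonnes in emissions.items())

# Illustrative regional emissions in tonnes:
# 1000 t CO2 + 40 t CH4 (x25) + 2 t N2O (x298) = 2596 t CO2-eq.
print(co2_equivalents({"CO2": 1000.0, "CH4": 40.0, "N2O": 2.0}))  # 2596.0
```

Note that methane's large weight means even modest enteric-fermentation volumes dominate the CO2-equivalent totals for livestock regions.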
The data for the Norwegian version of the PEM model have been taken from many different sources. The OECD PSE database, the OECD Aglink model, the Norwegian driftsgranskinger (i.e. the counterpart to the EU Farm Accountancy Data Network, FADN) (Kristiansen, 2018[7]), the direct payment register of the Norwegian Agriculture Agency (2020) and a tool to calculate payments at the individual farm level (Mittenzwei, 2018[3]) have been most important.

The PEM model distinguishes five regions, as presented in Chapter 1: eastern lowlands; Jæren; central lowlands; southern valleys; and northern Norway. The five regions are chosen to capture regional policy and geographic differences in order to provide a coherent analysis of the regional impact of policy reforms. Each region is relatively homogenous with respect to climatic and natural conditions for agriculture. The regions coincide with the regionalisation of the Norwegian driftsgranskinger and allow a straightforward calculation of factor shares for the major types of agricultural production. The regions also largely match the zones that exist for regionally differentiated payments in Norwegian agriculture.

Regional production volumes come from the price subsidy register of the Norwegian Agriculture Agency, which collects data from dairies, slaughterhouses, and mills for the administration of regionalised output payments. Production volumes for processed products are taken from the Norwegian Agriculture Agency, which collects data on raw milk processed into different dairy products in connection with the administration of the milk price equalisation scheme. Domestic and international prices stem from the PSE database. Regional differences in output prices are insignificant, and administrative prices are negotiated between the farmers' organisations and the government.

Land use for wheat and coarse grains stems from the direct payments register. That register also contains data for fodder on arable land, surface-cultivated land and fenced pastures.
It is assumed that 80% of that land is devoted to milk and beef. That factor is taken from the base year of the Norwegian agricultural sector model Jordmod (Mittenzwei, 2018[3]). It is further assumed that milk and beef occupy the same amount of grassland per animal unit. In sum, PEM covers about 80% of the utilised agricultural area in Norway. The inputs are farm-owned capital, cows, land, concentrated feed, machinery and equipment, hired labour, chemicals, energy, fertiliser, insurance, irrigation, other purchased inputs and interest. The zero-profit condition in PEM ensures that factor shares are sufficient to calibrate the model to the base year. Factor shares are calibrated from the driftsgranskinger, using the economic size unit (ESU) of a farm as a selection criterion to identify a sufficiently large sample of representative farms for the four productions in the five regions. For grains and milk, farms with more than 99% and 80% of their total ESU from grains and milk, respectively, were selected. Beef production in Norway takes place either in combination with dairy cows or separately (i.e. with suckler cows). Dairy farms deriving more than 150% of their economic value from beef relative to milk, and suckler cow farms with more than 66% of their total economic value from beef, are defined as beef farms. More detailed information on this data source can be found in Mittenzwei (2020[8]). There is no distinction between factor shares for wheat and coarse grains. The shares for each production in each region are calculated based on the unweighted average of the inputs of the farms (Mittenzwei, 2020[8]). The costs for feed concentrates, chemicals, energy, fertiliser, hired labour, insurance, machinery and equipment, irrigation and interest are taken directly from the farm accounts. The cost for land is calculated as the sum of own land and rented land multiplied by the price of rented land.
The cost for cows is half the value of cows on the balance sheet multiplied by the stipulated interest rate for debt. The cost of farm-owned input is calculated as a residual using the zero-profit condition: farm-owned input is defined as the sum of market revenue and payments minus all other input costs. There are many different budgetary payments in Norway. PEM covers the most important of these, as well as milk quotas. Certain legal and regulatory constraints are also built into the model structure. Most importantly, the Soil Act requires all arable land, surface-cultivated agricultural land, and fenced pastures to be kept in food production. The aim of the Act is to produce food, maintain the soil's production capacity and maintain the agricultural landscape. Less than 1% of the utilised agricultural area is denoted "out of production" in the direct payment register (Norwegian Agriculture Agency, 2020[9]). The standard procedure in PEM is to take payment information directly from the PSE database. This is not adequate in the case of Norway. In addition to the regionalised nature of payments, there is also a farm structural component in the payment system. This means that payment rates are higher for the first animals on a farm than for additional animals. In other words, per unit payment rates are negatively correlated with farm size. The rationale is to incentivise farmers not to fully exploit economies of scale. The payment rates in the Norwegian version of PEM are therefore based on a detailed calculation of the most important payments, grouped into six payment groups, for all active farms in Norway (Mittenzwei, 2018[3]). The payments within each of the six Norwegian payment groups are linked to single types of support in the PSE database (Table A A.4.). Output payments coincide with output payments in the PSE database. Income support to dairy farms is a scheme where only the first five dairy cows and the first 40 suckler cows of a farm receive support.
This payment is categorised as a payment based on non-current animal numbers with production required, because virtually all dairy farms in Norway have more than five dairy cows. Acreage payments are split between payments based on current area and payments based on non-current area. The latter, a cultural landscape payment, is provided at a uniform payment rate for all crops in all regions. Animal payments and welfare payments belong to the category of payments based on current animal numbers where production is required. Finally, "other payments" contain all payments that cannot be linked directly to the most prominent land uses or animal numbers. Investment support, organic payments, income tax deductions, and fuel concessions are examples of such payments. The payment amounts in Table A A.4. show the payments for Norwegian agriculture. As the PEM model includes only wheat, coarse grains, milk and beef, payment amounts need to be adjusted to account for that selection. The PEM model covers about 78% of the total payment amount, or NOK 10 807 million. Income support for dairy farmers is slightly higher than reported in the PSE database; the reason may lie in additional payment regulations that are not covered in the calculations for the individual farms. The four productions included in PEM account for about half of all output payments. They also account for almost all acreage payments and two-thirds of the animal payments. Half of the welfare payments can be traced to milk and beef, while nearly all other payments are related to the four productions. The regional profile of the payment system is clearly visible, with the lowest payment rates in Jæren and the highest payment rates in northern Norway for most payment categories. Regional differences in payment rates are smaller for crop products (Mittenzwei, 2020[8]).

## References

[5] Eurostat (2013), "Methodology and Handbook Eurostat/OECD. Nutrient Budgets, EU-27, Norway, Switzerland", No. 112, Eurostat, Luxembourg.
[7] Kristiansen, B. (2018), "Driftsgranskingar i jord- og skogbruk. Rekneskapsresultat 2017", NIBIO Bok 5(10), https://nibio.brage.unit.no/nibio-xmlui/handle/11250/2580084 (accessed on 4 August 2020).

[8] Mittenzwei, K. (2020), PEM model for Norway. Background report for the Agriculture Review.

[3] Mittenzwei, K. (2018), Økonomisk modellering av klimatiltak i jordbruket: Dokumentasjon og anvendelser i CAPRI og Jordmod. Versjon 1.0 av 30.04.2018, https://nibio.brage.unit.no/nibio-xmlui/handle/11250/2496992 (accessed on 4 August 2020).

[9] Norwegian Agriculture Agency (2020), Om produksjonstilskudd - Landbruksdirektoratet, https://www.landbruksdirektoratet.no/no/produksjon-og-marked/produksjonstilskudd/om-produksjonstilskudd (accessed on 4 August 2020).

[2] OECD (2015), OECD Review of Agricultural Policies: Switzerland 2015, OECD Review of Agricultural Policies, OECD Publishing, Paris, https://dx.doi.org/10.1787/9789264168039-en.

[6] OECD (2013), OECD Compendium of Agri-environmental Indicators, OECD Publishing, Paris, https://dx.doi.org/10.1787/9789264186217-en.

[1] OECD (2005), The Six-Commodity PEM Model: Preliminary Results, OECD, Paris, https://one.oecd.org/#/document/AGR/CA/APM(2005)30/en?_k=pgr4y0 (accessed on 8 March 2018).

[4] SSB (2020), Population, by sex and one-year age groups, https://www.ssb.no/en/statbank/table/11727 (accessed on 4 August 2020).
https://help.bynder.com/en/442268-on-the-fly-generator--beta-.html
Knowledge Base

On the Fly Generator (Beta)

Would you like to generate customized derivatives on demand and use them in your external systems? With the On the Fly (OTF) Generator you can request a customized derivative by adding parameters to the OTF derivative URL. By using OTF derivatives that are optimized for the platform and/or device your users are working on, you can ensure that they have the best possible user experience. The On the Fly Generator is fully integrated into your Bynder DAM, so you don't need to rely on external systems for OTF generation. Find out below how you can start integrating OTF derivatives into your external websites, social media platforms and email newsletters.

How to Enable the On the Fly Generator?

Inform your Customer Success Manager that you would like to participate in the Beta program. Your Customer Success Manager will be happy to further assist you in setting this up.

How to Create On the Fly (OTF) Derivatives?

You can create OTF derivatives by adding parameters to the OTF derivative URL.

1. Open the asset for which you want to create an OTF derivative.
2. Copy the OTF derivative URL in the Link to public files section. This URL functions as the base URL to which you'll add your transformations.
3. Create your customized OTF derivative by adding one of the transformations below directly behind the OTF derivative URL.

Resize Transformations

- Resize: sets the exact width and height of the OTF derivative. This operation changes the aspect ratio if the original image has a different aspect ratio.
  &io=operation:resize,width:400,height:400
- Resize with defined output file type: sets the width, height and file type of the OTF derivative. This operation changes the aspect ratio if the original image has a different aspect ratio. Valid output types: png, jpeg and webp.
  &io=operation:resize,width:400,height:400&output=png
- Resize by width: sets the width of the OTF derivative. The height is set automatically, respecting the aspect ratio of the original image.
  &io=operation:resize,width:100
- Resize by height: sets the height of the OTF derivative. The width is set automatically, respecting the aspect ratio of the original image.
  &io=operation:resize,height:100,width:auto
- Resize by height and set aspect ratio: sets the height of the OTF derivative and a custom aspect ratio. The width is set automatically, respecting the custom aspect ratio. If the custom aspect ratio differs from that of the original image, the OTF derivative will be distorted.
  &io=operation:resize,height:100,width:auto,aspectratio:1.77
- Resize while maintaining aspect ratio: resizes the OTF derivative to a maximum width or height; the smallest value is used. The other dimension is set automatically, respecting the aspect ratio of the original image.
  &io=operation:resize,maxwidth:100,maxheight:50
- Manual resize with new aspect ratio: resizes the OTF derivative to a maximum width or height and sets a custom aspect ratio. The value of the smallest dimension is used for resizing; the other dimension is set automatically, respecting the custom aspect ratio. If the custom aspect ratio differs from that of the original image, the OTF derivative will be distorted.
  &io=operation:resize,maxwidth:100,maxheight:100,aspectratio:1.77
- Resize extending the canvas with a solid colour: resizes the image to the specified width and height and fills the canvas with a solid colour. Supported colours: red, green, blue, yellow, white, black.
  &io=operation:resize,width:100,height:100,extend:true,background:blue

Resize + Crop Transformations

- Crop from the center: crops an area of x by y pixels from the center of the image.
  &io=operation:crop,width:400,height:400
- Crop from an x and y coordinate: crops an area of x by y pixels starting from a specified position given by x and y coordinates. The coordinates of the top left corner of the image are x=0, y=0.
  &io=operation:crop,width:400,height:200,x:10,y:0
- Crop from a focal point: crops an area of x by y pixels starting from the focal point. The focal point is defined in the asset detail view and uses x and y coordinates; in the example, x=50 and y=40.
  &io=operation:crop,width:100,height:100&focalpoint=50,40
- Resize and crop maintaining aspect ratio: resizes the OTF derivative to a specified width and height. Any extended width and height is cropped in order to maintain the aspect ratio of the original image.
  &io=operation:resize,width:400,height:400,crop:true

The remaining resize-and-crop variants all resize the OTF derivative to a specified width and height and crop any extended space in order to maintain the aspect ratio of the original image. They differ only in which edges are cropped, selected with the valign (top, middle, bottom) and halign (left, middle, right) parameters:

- Crop the bottom: &io=operation:resize,width:400,height:200,crop:true,valign:top
- Crop the top and bottom: &io=operation:resize,width:400,height:200,crop:true,valign:middle
- Crop the top: &io=operation:resize,width:400,height:200,crop:true,valign:bottom
- Crop the right side: &io=operation:resize,width:400,height:200,crop:true,halign:left
- Crop the right and bottom: &io=operation:resize,width:400,height:200,crop:true,valign:top,halign:left
- Crop the right, left and bottom: &io=operation:resize,width:400,height:200,crop:true,valign:top,halign:middle
- Crop the left and bottom: &io=operation:resize,width:400,height:200,crop:true,valign:top,halign:right
- Crop the top, bottom and right: &io=operation:resize,width:400,height:200,crop:true,valign:middle,halign:left
- Crop the right, left, top and bottom: &io=operation:resize,width:400,height:200,crop:true,valign:middle,halign:middle
- Crop the top, bottom and left: &io=operation:resize,width:400,height:200,crop:true,valign:middle,halign:right
- Crop the top and right: &io=operation:resize,width:400,height:200,crop:true,valign:bottom,halign:left
- Crop the top, right and left: &io=operation:resize,width:400,height:200,crop:true,valign:bottom,halign:middle
- Crop the top and left: &io=operation:resize,width:400,height:200,crop:true,valign:bottom,halign:right
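Assembling these parameter strings by hand is error-prone. The helper below is a hypothetical sketch (the base URL and the function itself are invented; only the &io=operation:... parameter syntax comes from the examples above):

```python
def otf_url(base_url: str, operation: str, **params) -> str:
    """Append an On-the-Fly transformation to an OTF derivative URL.

    `base_url` stands in for the OTF derivative URL copied from the
    asset's "Link to public files" section (hypothetical example below).
    """
    parts = [f"operation:{operation}"]
    parts += [f"{key}:{value}" for key, value in params.items()]
    return f"{base_url}&io={','.join(parts)}"

# Hypothetical base URL; the transformation string matches the
# "resize and crop maintaining aspect ratio" example above.
url = otf_url("https://example.bynder.com/transform/abc123",
              "resize", width=400, height=400, crop="true")
print(url)
# → https://example.bynder.com/transform/abc123&io=operation:resize,width:400,height:400,crop:true
```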
http://mathoverflow.net/questions/45648/two-geometric-probability-questions-one-answered-one-more-to-go
# Two geometric probability questions (one answered, one more to go)

1. Given $n$ independent uniformly distributed points on $S^2$, what's the distribution of the distance between the two closest points?

2. Consider $n$ iid uniform points on $S^1$, $Y_1, \ldots, Y_n$, in counterclockwise order. Now let $I_1 = Y_2-Y_1, \ldots, I_n = Y_1 - Y_n$ be the spacings between consecutive points. Finally order the spacing sequence into $I_{(1)} < I_{(2)} < \ldots < I_{(n)}$. They will also generate a spacing sequence, of size $n-1$, $J_1 = I_{(2)} - I_{(1)}, \ldots, J_{n-1} = I_{(n)} - I_{(n-1)}$. What's the distribution of this last sequence? In particular, what's the mean value of the smallest $J$ and the largest $J$?

- For the first problem, I think you just fix one of the points to be the north pole, and look at surface areas of caps. – Eric Tressler Nov 11 '10 at 5:53
- The answer will depend on whether you're talking about chord distance or distance on the surface, though. – Eric Tressler Nov 11 '10 at 5:55
- These two are essentially the same, aren't they? – John Jiang Nov 11 '10 at 6:31
- I mean there is a simple formula relating the two, so the distribution of one would be a simple transform of the other. – John Jiang Nov 11 '10 at 6:32

There is an asymptotic formula for the minimal spherical distance when $n$ is large (see e.g. the PhD thesis "Random Diameters and Other U-Max-Statistics" by M. Mayer, Corollary 3.37):

Theorem. Assume that the points $\xi_1,\xi_2,\dots,\xi_n$ are independent and uniformly distributed on $\mathbb S^{d-1}$. Let $S_n$ be the smallest central angle formed by point pairs within the sample. Then for $t > 0$ $$P\{n^{2/(d-1)}S_n\leq t\}=1-\exp\left(-\frac{\Gamma(\frac{d}{2})}{4\pi^{1/2}\Gamma(\frac{d+1}{2})}t^{d-1} \right)+\mathcal O(n^{-1}).$$

I am not sure if there is a nice explicit formula for finite $n$.
In fact, the knowledge of the exact form of the distribution $P\{S_n\leq\theta\}$ on $\mathbb S^2$ would lead to a solution of the Tammes packing problem (which is only solved for a few values of $n$ to the best of my knowledge).

-

2) This is, of course, the same as asking about spacings between uniform points on a segment (you can say that $Y_1=0$, for example). Let it be the segment $[0,1]$. Now the joint distribution of $I_1,\dots, I_{n}$ is the same as that of $E_1/E,\dots, E_n/E$, where $E_1,\dots, E_n$ are iid exponentially distributed and $E=\sum_{k=1}^n E_k$ (see Devroye, Non-Uniform Random Variate Generation, p. 208). So the distribution of $I_{(1)},\dots, I_{(n)}$ is the same as that of $E_{(1)}/E,\dots, E_{(n)}/E$. But the joint distribution of $\{E_{(k)}-E_{(k-1)},k=1,\dots,n\}$ ($E_{(0)}:=0$) is the same as that of $\{(n-k+1)^{-1} E_k,k=1,\dots,n\}$ (ibid., p. 211). So the distribution of $J_1,\dots, J_n$ is the same as that of $\{(n-k+1)^{-1} E_k/E,k=1,\dots,n\}$, where $E_1,\dots, E_n$ are iid exponential rv's and $E=\sum_{k=1}^n E_k$. And this is, by the previous paragraph, equivalent to saying that the distribution is the same as that of $\{(n-k+1)^{-1} I_k,k=1,\dots,n\}$. These are not independent, but very close to being so, and from here you can find the distribution of the maximum and minimum (but nothing very pleasant there, as the variables in question are not identically distributed; a formula for the expectation looks extremely ugly).

How to get the distribution of $J$ omitting $E$: in fact, this is simple owing to the fact that the ordering map on the simplex $\{(t_1,\dots,t_n)\mid t_j\ge 0,\sum_j t_j=1\}$ (the support of $I$) is piecewise linear, and moreover each image has the same number of preimages due to the apparent symmetry. So the distribution of $\{I_{(1)},\dots,I_{(n)}\}$ is uniform on its support. Now we have a one-to-one linear map to $J$. So $J$ is also uniformly distributed. So it's only about finding its support, which is simple, as John Jiang noted.
- Thank you for the great answer. I am always scared of exact formulas. – John Jiang Nov 12 '10 at 6:16 @John Jiang: You're warmly welcome. Precise formulas useless here, if you want, I can look into asymptotics. – zhoraster Nov 12 '10 at 8:01 @zhoraster: actually I was able to use the formula you gave above to compute the exact distribution of the minimum: $P(\min J_i > y) = (1-y n(n+1)/2)^{n-1}$, so it doesn't seem bad at all. What I did is a pretty geometric argument. Notice that $y$ ranges between $0$ and $2/(n(n+1))$, as expected from its being the smallest gap of gaps of $n$ points on the circle. Using that formula, we just need to integrate $P(\min J_i > y)$ for $y \in [0,2/(n(n+1))]$ to get the expected value, which is $2/((n-1)n(n+1))$. – John Jiang Nov 12 '10 at 9:26 @John Jiang: I see, I had another formula initially (from $E_k/E$), just didn't see that it can be reduced to this. I even discovered now that it is quite straightforward to go from the distribution of $I$ to the one of $J$ omitting $E$. Anyway, glad that you've found it, congratulations! – zhoraster Nov 12 '10 at 10:09 Thanks again! Your geometric ansatz really helped. I'd be glad to hear how to go directly from I to J, as I am still interested in the entire J sequence, and hopefully prove something nice about them. – John Jiang Nov 12 '10 at 16:41
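The asymptotic law quoted in the first answer is easy to check by simulation. For $d = 3$ the constant reduces to $\Gamma(3/2)/(4\pi^{1/2}\Gamma(2)) = 1/8$ and the scaling factor is $n^{2/(d-1)} = n$, so the limit is $P\{n\,S_n \le t\} \to 1-e^{-t^2/8}$. A minimal pure-Python Monte Carlo sketch (the choices n = 100, 200 trials and t = 2 are arbitrary):

```python
import math
import random

def min_central_angle(n, rng):
    """Smallest central angle among n iid uniform points on the sphere S^2."""
    pts = []
    for _ in range(n):
        # Normalized Gaussian vectors are uniform on the sphere.
        v = (rng.gauss(0, 1), rng.gauss(0, 1), rng.gauss(0, 1))
        norm = math.sqrt(sum(c * c for c in v))
        pts.append(tuple(c / norm for c in v))
    smallest = math.pi
    for i in range(n):
        for j in range(i + 1, n):
            dot = sum(a * b for a, b in zip(pts[i], pts[j]))
            smallest = min(smallest, math.acos(max(-1.0, min(1.0, dot))))
    return smallest

rng = random.Random(2024)
n, trials, t = 100, 200, 2.0
# Empirical P{ n * S_n <= t } versus the limit 1 - exp(-t^2 / 8) for d = 3.
empirical = sum(n * min_central_angle(n, rng) <= t for _ in range(trials)) / trials
limit = 1.0 - math.exp(-t * t / 8.0)
print(f"empirical: {empirical:.3f}, limit: {limit:.3f}")
```

With these parameters the empirical frequency should fall close to the limiting value of about 0.39, up to Monte Carlo noise and the $\mathcal O(n^{-1})$ correction.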
http://quadri-canvas.it/poou/quantities-in-chemical-reactions-lab-report.html
Mix thoroughly until fizzing and evolution of gas ceases. percent yield of the reaction determined. Lecture: Part III 8. 001 grams of copper (II) sulfate, CuSO4, and 2. 0 g of CH 4. 2 (CB 7-9) Online Lab: Mole Lab 100 Worksheet B (long) pg 14-17 Mole Video (Dr. the number of moles and the volumes B. The coefficients of the chemical equation show the relative amounts of substance needed for the reaction to occur. The coefficients in a. A buffer solution is one which resists changes in pH when small quantities of an acid or an alkali are added to it. The user controls the action of a piston in a pressure chamber filled with an ideal gas, illustrating relationships between temperature, volume, pressure, and molecular weight. of chemical reactions. Simultaneous Determination of Several Thermodynamic. The mole will allow us to study in greater detail chemical formulas and chemical reactions. Make, record, and report experimental observations. (The patient ultimately survived. Authors: D. To examine a variety of reactions including precipitation, acid-base, gas forming, and oxidation-reduction reactions. Use lab data to calculate molar mass of an unknown metal. Textbook: Brown, LeMay, Bursten, Chemistry The Central Science, 2005, 10 ed. • Your post-lab report must be typed using the Microsoft Word files provided on the website. The headings should be copied and underlined exactly as this format shows. (b) An experiment was carried out to determine the value of the equilibrium constant, K c, for the above reaction. The product was placed in an ice bath to lower the temperature before beginning the later reactions, so the results recorded would be more accurate. The report addresses: (1) the quantities of chemical substances manufactured, imported, used as a reactant, used in industry and consumer products, or lost to the environment; and (2) worker exposure. percent yield of the reaction determined. 
To incorporate graphing exercises involving mass relationships into lab activities, three simple actions are necessary: (i) the quantities of reactants that have undergone transformation and the quantities of products that have been made as a result of the chemical reaction must be directly measured or deduced, (ii) the mass of reactants in the. The rate of a chemical reaction is the change in the concentration of a reactant or a product per unit time. It produced the most gas because a chemical reaction occurred. You will discover in the course of your lab the varying effects of curing time on the strength of concrete. 5 - Molarity; 7. Chemical Equilibrium. the number of molecules and the volumes C. Chemistry lab: 1 page report, double spaced, times roman, 12 inch font, with at least 2 or more references. Organic reactions often yield a number of by-products, some inorganic, some organic. Examples of qualitative tests would include ion precipitation reactions (solubility tests) or chemical reactivity tests. Remember to write the phase [(g),(l),(s), or (aq)] after each reactant and product. Limiting Reactant Lab: Discuss the concepts of limiting reactant, theoretical yield and percent yield. 9 chem reactions is to recognize any condition or read online for students report abuse. This appendix presents Laboratory Chemical Safety Summaries (LCSSs) for 88 substances commonly encountered in laboratories. 17 Homework-Lab write-up (lab quiz next class) Lesson 3. The last reaction, the decomposition of potassium chlorate, includes manganese(IV) oxide (MnO2) as a catalyst. Students will identify variables in an experiment; design and carry out an experiment, collect and analyze their data; and draw conclusions based on their. All proper chemical equations are balanced. The glossary below cites definitions to know when your work calls for making these and the most accurate molar solutions. Thus, 0 g of O 2 remains. The reactions will either give off heat or absorb heat. 
So the assignment had us. iii) To gain an understanding of the periodic table as an organizing concept of chemical properties. The reaction basically involves dissolving borax Na 2 B 4 O 7. Dealing with quantities is very important in chemistry, as is relating quantities to each other. Lab reports must be typeset in a word-processing program. Pablo Gonzalez 2,953 views. Italicized font represents information to be shared orally or physically completed with the students at this time. In this lab, we carried out a textbook example of the Suzuki reaction, coupling an aryl bromide with an arylboronic acid to produce a. compound as chemical potential energy. Work with different sized plastic bottles, too. The fact that a chemical reaction occurs means that the system is not in equilibrium. Why Do Helium Balloons Deflate? What Is an Experiment? Definition and Design. Jose popoff 11th grade 9C Stoichiometric Relationships Report November 2th 2012 Introduction Stoichiometry is a branch of chemistry that deals with the relative quantities of reactants and products in chemical. You will be graded on your accuracy. The copper first underwent a redox reaction with nitric. CHEMICAL SAFETY REPORT. Answer In-Lab Questions #1 and #2 on page E4C-5. Guidelines for a Physics Lab Reports A laboratory report has three main functions: (1) To provide a record of the experiments and raw data included in the report, (2) To provide sufficient information to reproduce or extend the data, and (3) To analyze the data, present conclusions and make recommendations based on the experimental work. A catalyst is a chemical with a purpose to speed up the rate of reaction, without it being used up in the chemical changes that take place (Factors Affecting Rate of Reaction, pg. In your own words, state the Law of Conservation of Mass. Using a balanced chemical equation to calculate amounts of reactants and products is called stoichiometry. the number of moles and the volumes B. 
THE DECOMPOSITION OF POTASSIUM CHLORATE This lab is derived almost entirely from a lab used at the United States Naval Academy PURPOSE: The purpose of this experiment is to study the decomposition of potassium chlorate, both by verifying the identity of one of the products and by quantitatively determining the correct stoichiometry. Students will be able to observe chemical changes: one in which a gas is produced and a change in temperature occurs, and one in which the substance is visibly changed. Only 3/16 moles were used in the reaction. The main purpose of a scientific report is to communicate the finding from the work and to help the reader to understand them. The OSHA Lab Standard (29 CFR 1910. Basic theory for a gas chromatography lab report: the overview of the main principles Nowadays, it is impossible to imagine a chemical laboratory without a gas chromatograph. We are now going to delve into the heart of chemistry. 7 6 5 8 9/24 9/25 Lab Assessment, Acid-Base Reactions 4. • Your post-lab report must be typed using the Microsoft Word files provided on the website. A virtual lab from the University of Oregon allows one to perform three experiments. Solutions Th. To examine a variety of reactions including precipitation, acid-base, gas forming, and oxidation-reduction reactions. ") How does a thermometer read temperature?. 02 X 1023) of particles. Chemistry plays a great part in every day of our life. Through experimenting with quantities of reactants and temperature, students are lead to write claims about how these variables affect the rate of the reaction. The coefficients of the chemical equation show the relative amounts of substance needed for the reaction to occur. Purpose: The purpose of this experiment is to demonstrate a cycle of reactions involving copper. Qualitative analysis is a method used for identification of ions or compounds in a sample. 
Flask 4 will produce only the same amount of hydrogen as Flask 3 and have excess Mg left over, since the reaction is limited by the HCl. You will perform a few preliminiary experiments to become acquainted with the observations in this experiment so that you will know what to expect in the reactions. Lab #3 - Periodic Trends - Density. Probably the most common application in biology of this technique is in the measurement of the concentration of a compound in solution. PS-5 investigate chemical reactions and the release or consumption of energy. 3 - Limiting and Excess Quantities; 7. In this lab, we will be using a chemical test to determine the purity of the aspirin you synthesized. Chemistry Calculators Online. Eric Cotton, Ph. The development and implementation of a chemical hygiene plan (Laboratory Safety and Chemical Hygiene Plan) is a central requirement of the “Laboratory Standard” ( 1910. The reactions are: Ag+- + Cl ł AgCl(s) 2Ag+2 + CrO 42 -ł Ag CrO 4 (s) By knowing the stoichiometry and moles consumed at the end point, the amount of chloride in an unknown sample can be determined. The bottle is thrust forward as the rushing foam and gas shoot backward. Pre-Lab Questions: 1. You will be handing in the lab report but answering only the questions about the ester you created. The balanced chemical reaction may be used to answer. The rate of a reaction is the change of concentration taking place per change in time. Chemistry 112 Laboratory Experiment 7: Determination of Reaction Stoichiometry and Chemical Equilibrium Introduction The word equilibrium suggests balance or stability. Chemical Changes- To learn that the formation of gas bubbles is an indication of a chemical change. 0 M acetone and water using a 10. Combustion – An element reacts with oxygen to form an oxide (this is also a Synthesis) A compound consisting of carbon, hydrogen and/or oxygen (C8H18 is called octane) reacts with oxygen to form carbon dioxide and water. 
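The flask comparison above can be checked with a small limiting-reactant calculation for Mg + 2 HCl → MgCl2 + H2; the mole amounts used here are assumed for illustration, not the lab's actual quantities:

```python
# Limiting-reactant bookkeeping for Mg + 2 HCl -> MgCl2 + H2.

def h2_moles(mol_mg, mol_hcl):
    """Return (limiting reagent, moles of H2 produced)."""
    if mol_mg <= mol_hcl / 2:        # enough HCl to consume all the Mg
        return "Mg", mol_mg          # 1 mol H2 per mol Mg
    return "HCl", mol_hcl / 2        # 1 mol H2 per 2 mol HCl

# Assumed example amounts: with too little HCl, the acid limits the yield.
print(h2_moles(0.010, 0.015))
```

Doubling the excess reagent in such a calculation changes nothing, which is exactly the Flask 3 versus Flask 4 observation.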
Several factors affect the rates of reactions. Decomposition chemical reaction is the reaction where only one compound decomposes and results in two or more than two products. For chemical reactions, one can define a "standard" enthalpy of reaction by specifying "standard" initial and final states of the reacting system. When these laws are applied to chemical reactions, they relate the equilibrium condition (whether the reaction is proceeding in the forward or reverse direction) and the temperature of the reaction to the thermodynamic functions, namely change in enthalpy (∆H), change in entropy (∆S), and change in free energy (∆G). Solid zinc sulfide reacts with oxygen in the air. What products form in the complete combustion of a hydrocarbon? 3. Full Lab Report Due: Nov. 4 - Percent Yield; 7. Entropy: Red and white Dice are used to investigate probability and microstates. The color change occurs when I2 reacts with starch to form a dark blue iodine/starch complex. –Just like following a recipe, certain ingredients are combined (and often heated) to form something new. WEEK THREE -- CHAPTER 8: QUANTITIES IN CHEMICAL REACTIONS. The purpose of this lab was to determine the mole ratios of the reactants hypochlorite ion (OCI ) and thiosulfate (S O ) when reacted in a chemical reaction. During the actual experiment, we will use centrifugation to separate the enzyme (soluble) from its substrate (insoluble amylose-azure) to stop the reaction. (Laboratory Report 3 covers Experiments 2, 3, and 4) The Final Laboratory Report is a revision of Report 3, and thus also covers Experiments 2, 3, and 4. Chemical Reactions Infinite Campus Update Classifying Chemical Reaction lab (20pts. " Chemical Engineering June 1991: 36-42. Typically, one of the reactants is used up before the other, at which time the reaction stops. 
Chemical reactions, as macroscopic unit operations, consist of simply a very large number of elementary reactions, where a single molecule reacts with another molecule. Answer questions similar to pre-lab 1-2, post-lab 1-2. Stoichiometry Lab CHEMICAL REACTIONS OF COPPER AND PERCENT YIELD PRE-LAB QUESTIONS Before beginning this experiment in the laboratory, you should be able to answer the following questions: 1. Lab Reports: 60% of the grade (A copy of the format is included in the syllabus. Learn more about Science. The chemical reaction is written as follows: 2Mg + O 2 2MgO. Write and balance chemical reactions simple chemical reactions. Creating a balanced equation rather than using a skeletal equation is very important, as the number of. To write a chemical reaction. Enthalpy of Chemical Reactions. 001 grams of copper (II) sulfate, CuSO4, and 2. You will be graded on your accuracy. " Chemical Engineering June 1991: 36-42. Chemical quantities in reactions Th. Chemical structures and mechanisms should be hand written showing all the relevant atoms and bonds, particularly at the reactions sites. The table below lists most of the information that is needed for a formal written report. 7th Introduction to a Chemical Analysis Laboratory: Good Laboratory Practices, Data Analysis, Technical Reports and Full Lab Reports. It will be returned to you. Students can work in pairs or small groups to accommodate different class sizes. Topic: Chemistry : Chemical quantities and aqueous reactions: precipitation reactions, acid-base and gas-evolution reactions, and oxidation-reduction reactions. Chemical reactions Week 8. Chapter Seven 7. A chemical reaction takes place when new products are made from the reactants. When applying Hess' law, it is important to establish a convention for the. 
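As a sketch of using a balanced equation to calculate product amounts, here is the magnesium combustion mentioned above (2 Mg + O2 → 2 MgO) worked through in Python; the 1.000 g starting mass is an assumed example value:

```python
# Mass of MgO from burning magnesium: 2 Mg + O2 -> 2 MgO (1:1 Mg:MgO).
M_MG = 24.305            # g/mol
M_MGO = 24.305 + 15.999  # g/mol

def mgo_mass(grams_mg):
    return grams_mg / M_MG * M_MGO

# An assumed 1.000 g of Mg yields about 1.658 g of MgO.
print(mgo_mass(1.000))
```

The product weighs more than the starting metal because the oxygen taken from the air is now part of the solid, consistent with conservation of mass.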
Full text of "Magnesium Chloride Lab" See other formats Magnesium Chloride Empirical Formula Calculation Lab Michael Bradley Table of Contents Definitions 1 Introduction 2 Law of Conservation of Matter 2 Single Replacement Reactions 2 Empirical Formula 2 The Mole and Molar Mass 3 Mathematics Used in this Lab 3 Purpose 5 Materials 6 Procedure 7 Lab Results 8 Key Values: Values Collected and. Manipulate the ratio of reactants to investigate the stoichiometric relationship of a chemical reaction. The purpose of this lab was to determine the mole ratios of the reactants hypochlorite ion (OCI ) and thiosulfate (S O ) when reacted in a chemical reaction. No other report is needed. Solid zinc sulfide reacts with oxygen in the air. From the solubility information at various temperatures, a variety of other thermodynamic quantities can be determined for the system. b) Safety Goggles. The entire write-up is done in the notebook. The reaction which you used to prepare the salt in this experiment should have proceeded to completion. Photosynthesis Lab Report the light energy into the chemical energy of ATP and NADPH, which aid in the next stage, the Calvin Cycle or light independent reactions. View Lab Report - Informal Lab 8. ) These incidents occurred because people weren’t paying attention to quantities. The report should be printed or typed double-spaced. 51g NOTE: • Al is in the form of aluminium foil • CuCl2(aq) is light blue in colour • Cu(s) is deposited as a spongy brown precipitate. Another Problem - Hess’s Law Acetylene torches are used in welding. Either will turn your skin yellow on contact. Wear chemical splash goggles, chemical resistant gloves and a chemical resistant lab coat. The report addresses: (1) the quantities of chemical substances manufactured, imported, used as a reactant, used in industry and consumer products, or lost to the environment; and (2) worker exposure. 
More recently (1983), the General Conference on Weights and Measures defined the meter as the distance light travels, in a vacuum, in 1/299,792,458 seconds, with time measured by a cesium-133 atomic clock, which emits pulses of radiation at very rapid, regular intervals. Chemical Reactions Science and Engineering Practices: Constructing Explanations and Designing Solutions. Target Grade Level: 9-12. Apply scientific principles and evidence to provide an explanation about the effects of changing temperature or concentration of the reacting particles. Students visit lab stations on the rate at which a reaction. 2: Metal ion estimation: Quantitative estimation of copper (II), calcium (II) and chloride in a mixture Expt. In this lab, we will be using a chemical test to determine the purity of the aspirin you synthesized. 3 - Limiting and Excess Quantities; 7. A method of measuring turbidity that is created during a chemical reaction between antigen and antibody. Non-italicized font represents additional information included to support the teacher's understanding of the content being introduced within the CELL. Your pre-lab preparation for each experiment will be. th A calculator is required for tests. What is a hydrocarbon? 2. REACTIONS AND MECHANISMS: Show balanced overall reactions. Specifically, we will investigate stoichiometry, the relationship between quantities of materials in chemical reactions. Chemical and Physical Changes. Purpose: Identify changes as chemical or physical. Unit 4: Chemical Quantities and Aqueous Reactions. Pre- and Post-Lesson Vocab/Questions/Review; Unit 4 Terms and Test Prep Questions; Conversions (T-Chart) Review / Organizer; Conversions Graphical Organizer. A functional group is a. Chemistry Lab Report (Copper Cycle): This is a lab report for my General Chemistry class. Postma; 7th Edition; ISBN: 1-4292-1954-8. Helpful guides for lab report and notebook preparation can be found under the "Laboratory Resources" tab.
Place a small piece of paper on the electronic balance. Write balanced equations for the two reactions you will perform in this lab. Although orders of reaction can be any value, for this lab we will be looking only for integer values for the orders of reaction (0, 1, 2 are acceptable but not 0. We have adapted the vitamin C clock reaction to a student laboratory experiment in which the orders with respect to peroxide and iodide, the rate constant, and the activation energy are determined by the method of initial rates. Counting Atoms [Text Pilot] Balancing Plix [Due the first 2 to get the idea of limiting reagants and excess reagants using chemicals and hot dogs. Then write net ionic equations for each. log[H2O2]i and obtaining the slope of the line using Excel with the data. To create an experiment and report results. The color change occurs when I2 reacts with starch to form a dark blue iodine/starch complex. CO 2: 3/16 moles were produced by the reaction. Pre-laboratory Assignment: Mole Ratios and Reaction Stoichiometry. Flask 1 and 2 are limited by smaller quantities of Mg. March 11th, 2013. There are mainly 5 factors that affect rate of reaction : 1. 2: Metal ion estimation: Quantitative estimation of copper (II), calcium (II) and chloride in a mixture Expt. Materials Needed: "Patriotic Colors" Lab Kit straw Chemicals Needed: Common household chemicals (various). One lab report grade is dropped. Only 3/16 moles were used in the reaction. CHEM 108 CD1 May 02, 2019 Experiment 8: Quantities in. Out main interest here is to apply the first law of thermodynamics to chemical reactions carried out under certain conditions. 2 mL (2 - 3 drops) of liquid material (NOT MORE) should be used. Limiting Reactant Lab: Discuss the concepts of limiting reactant, theoretical yield and percent yield. doc: File Size: 49 kb:. UNIT 3- Energy Changes and Rates of Reaction. 
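The slope calculation described above (plotting ln[H2O2] versus time and reading the slope off the fitted line in Excel) can be reproduced directly. The data below are synthetic, generated from an assumed rate constant of 0.05 s⁻¹ so the fit can be checked against a known answer:

```python
import math

# First-order rate constant from the slope of ln[A] vs. time (the same
# calculation the lab does in Excel). Data are synthetic, built from an
# assumed k so the fitted slope can be verified.
k_true, A0 = 0.05, 1.0
times = [0, 10, 20, 30, 40, 50]                      # s
conc = [A0 * math.exp(-k_true * t) for t in times]   # [A] at each time

def slope(xs, ys):
    """Ordinary least-squares slope of y on x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

k_est = -slope(times, [math.log(c) for c in conc])
print(k_est)  # recovers ~0.05 s^-1
```

With real measurements the points scatter about the line, and the quality of the straight-line fit is itself evidence that the reaction is first order in that reactant.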
Gas Forming Reactions typically go to completion because one or more of the products are removed from the reaction vessel via the formation of a gas, which leaves the reaction mixture as bubbles. The only blog of its kind, it serves as a unique forum for exchange and discussion of lab and plant safety and accident information. Chemistry lab: 1 page report, double spaced, times roman, 12 inch font, with at least 2 or more references. 5 - Molarity; 7. Chemical reactions Th. Pablo Gonzalez 2,953 views. Enzymes accelerate the velocity of virtually all reactions that occur in biological systems including those involved in breakdown, synthesis and chemical transfers. (Mini Report) Lesson 3. The laboratory and lecture are separate courses and you will be assigned a separate grade for each. This chemical will serve as a catalyst for the reaction that will take place. Summers, D. Don) pg 12 Demo: Measure out 1 mole of NaCl, H2O, NaHCO3 and show to your teacher In class Podcast 3. Equations that are not properly balanced are not correct chemical equations, even if they posses the correct elements and quantities. A by-product of many chemical reactions that take place in living cells is hydrogen peroxide (H 2 O 2). Chemical Reactions and Quantities. The change in each quantity must be in agreement with the reaction stoichiometry. Abstract For Enzyme Lab Report. Safety glasses are required, and they are sold at the bookstore. which signals the end point (1). ) These incidents occurred because people weren’t paying attention to quantities. For the generic reaction: 2 A + B → C we would measure the rate by measuring either the increase. Classifying Chemical Reactions Lab video Products (Reg) Reaction Rates Student Inquiry Lab Report Organizer (With lines) Reaction Rates Student Inquiry Lab Report Organizer (NO Lines) Reaction Rates Student Inquiry Lab Rubric. 
By dissolving Borax into distilled (DI) water at two different temperatures, the amount of borate that went into the solution at each temperature can be measured. The breaking of ANY chemical bond requires the input of energy which has been assigned a “+” value. Title = Reaction Summary For an organic reaction, there is no point in having a Worded Title: The chemical reaction is the best title summary of what you did!. It may be necessary to add water to the mixture to complete the reaction. To identify the products formed in these reactions and summarize the chemical changes in terms of balanced chemical equations and net ionic equations. Experiment 8: Chemical Moles: Converting Baking Soda to Table Salt What is the purpose of this lab? We want to develop a model that shows in a simple way the relationship between the amounts of reactants used in a chemical reaction, and the amounts of products formed. Grade your lab report using this rubric. you worked with a partner by writing “Lab Partner (fill in name). Italicized font represents information to be shared orally or physically completed with the students at this time. PCR is a method used for the amplification of a specific DNA segment such that only small quantities are required for analysis. ii ENVIRONMENTAL CHEMICAL ANALYSIS 2018 LAB SCHEDULE CHEMISTRY 311 Sept. Students will observe the temperature change in a chemical reaction. A lab-based “intelligent microsystem” employs machine learning for modeling chemical reactions, researchers report. 18 Homework-S'mores Thought Lab Lesson 3. A metal was found in the lab that was missing its labeling tape. Redox: Zn + MgCl 2 ZnCl 2 + Mg Zinc is oxidized: 0Zn - Zn2+ + 2e Magnesium is reduced: Mg2+ + 2e- Mg0 Metathesis: Pb(NO 3) 2 + H 2. Excellent - lab report contains all necessary parts, well developed, in complete. Fun and Interesting Chemistry Facts. AP Chemistry Lab #9 Page 1 of 7. The reactions will either give off heat or absorb heat. 
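Determining the heat of reaction from the temperature dependence of the equilibrium constant, as in the borax solubility experiment, is an application of the van't Hoff relation. A minimal sketch follows; the equilibrium constants here are fabricated from an assumed ΔH° of 50 kJ/mol rather than taken from real borax data:

```python
import math

# Van't Hoff estimate of the heat of solution from K at two temperatures:
# ln(K2/K1) = -(dH/R) * (1/T2 - 1/T1), so dH = -R*ln(K2/K1)/(1/T2 - 1/T1).
R = 8.314  # J/(mol*K)

def vant_hoff_dh(K1, T1, K2, T2):
    return -R * math.log(K2 / K1) / (1 / T2 - 1 / T1)

# Fabricated check: build K values from an assumed dH = 50 kJ/mol and
# confirm the formula recovers it. Real input would be measured Ksp data.
dH_assumed = 50_000.0
T1, T2 = 288.15, 308.15  # K
K1 = math.exp(-dH_assumed / (R * T1))
K2 = math.exp(-dH_assumed / (R * T2))
print(vant_hoff_dh(K1, T1, K2, T2))  # ~50000 J/mol
```

The entropy term ΔS°/R cancels when the two ln K values are subtracted, which is why only two temperatures are needed to estimate ΔH°.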
To create an experiment and report results. Test-tube quantities of hazardous liquids can be flushed down the sink with plenty of water. Italicized font represents information to be shared orally or physically completed with the students at this time. 2 Chapter 17-18 Review Guide Rate of. It produced the most gas because a chemical reaction occurred. percent yield of the reaction determined. Laboratory 1: Chemical Equilibrium 1 Reading: Olmstead and Williams, Chemistry , Chapter 14 (all sections) Purpose: The shift in equilibrium position of a chemical reaction with applied stress is determined. Chemical kinetics determines the overall order of the reaction, as well as the order of each of the reactants. There are mainly 5 factors that affect rate of reaction : 1. Reactions in your experiment based on ideas of double exchange, combination. Balance chemical equations. Scheme 1: Overall Reaction Scheme of Friedel-Crafts Alkylation The Friedel-Crafts Alkylation that was performed in lab involved the reaction of. 2 Water dissociates into ions in the order of 1x10-7 M [H+] and 1x10-7 M [OH-]. Irish Branch Use applied for: Industrial use as a reaction medium and a solvating agent in mediating subsequent chemical transformation reactions leading to. PhET sims are based on extensive education research and engage students through an intuitive, game-like environment where students learn through exploration and discovery. Chemical quantities in reactions Week 9. These ions produce a minimum of two cations and two anions on. Unit at a Glance. You will be handing in the lab report but answering only the questions about the ester you created. Reaction \ref{3}: Reaction \ref{4}: Your goal in this lab is to experimentally verify the mole-to-mole ratios between a certain reactant and a certain product in both reactions. The reaction that occurs as the propellant burns is: 3NH 4ClO 4(s) + 3Al (s)--- > Al 2O 3. 
Reaction #1: Add a small scoopful of sodium chloride (NaCl) into an evaporating dish. The product of the reaction between copper and Nitric acid in Step 2 was placed on ice because during the exothermic reaction, large quantities of heat were released. To do colorimetry, you need a colorimeter, which starts in the hundreds of dollars for a lab-grade tool. This is equivalent with 5. Based on their observations, students are then asked to explain the evidence that a chemical reaction occurred. It is this reaction that will be studied in this experiment. Overall Format is correct – information is in the correct section (includes name, dates performed, and roles and names of all group members) 3 2 1 0 Spelling and grammar are correct 4 3 2 1 0 Experiment performed safely with Pre-Lab submitted on-time 4 3 2 1 0 Abstract Problem(s) clearly stated and relevant to overall experiment 5 4 3 2 0. SWBAT prove that even though new substances are produced during chemical reactions, the beginning and ending mass/weight stays the same. A reaction was observed when placed in a solution of zinc chloride. Chemical Equilibrium- Distinguish between reactions that go to completion and those that are reversible. When a chemical reaction takes place in a container which prevents the entry or escape of any of the substances involved in the reaction, the quantities of these components change as some are consumed and others are formed. All of the O 2 was consumed during the reaction. When Sodium borate octahydrate (Borax) dissociates in water it forms two sodium ions, one borate ion and eight water molecules. The standard enthalpy change ΔH° is expressed in kJ/mol. The synthesis of plastic precursors, such as polymers, involves specialized. What is a hydrocarbon? 2. you worked with a partner by writing “Lab Partner (fill in name). View our newest products for your classroom. 
Purpose - The purpose of this lab was to observe different type of chemical reactions to write and balance chemical equations. University. PS-7 investigate factors (e. This appendix presents Laboratory Chemical Safety Summaries (LCSSs) for 88 substances commonly encountered in laboratories. Chemical reactions Th. The reaction series includes single replacement, double replacement, synthesis, and decomposition reactions. You may wish to demonstrate how to mix chemicals, apply heat, or add indicator. Why Do Helium Balloons Deflate? What Is an Experiment? Definition and Design. To incorporate graphing exercises involving mass relationships into lab activities, three simple actions are necessary: (i) the quantities of reactants that have undergone transformation and the quantities of products that have been made as a result of the chemical reaction must be directly measured or deduced, (ii) the mass of reactants in the. docx from CHEM 108 at Community College of Baltimore County. Peer-Review Checklist for Rocket Lab. CH104 Quantitative Analysis (Lab) Introduction In this course you will learn basic volumetric and gravimetric methods of chemical analysis. It is so big that only an elephant could use toothpaste this large. It is very important for a chemist to understand the conditions that affect the rate of a chemical reaction. chemical reactions; 7. Determine the heat given off by the reaction from the temperature change. 1) All chemical reactions are chemical changes. Changes in reaction conditions, including concentrations of reactant(s), temperature, and pH, can all influence the rate with which the reactants are consumed and products are formed. Chemistry lab: 1 page report, double spaced, times roman, 12 inch font, with at least 2 or more references. Chemical names, formulas, quantities, and reactions 4. When chemical reactions occur, the reactants undergo change to create the products. 
Access precise, dependable, and timely information on synthetic organic research, including organometallics, total syntheses of natural products and biotransformation reactions. (ex: AC +BD AD + BC). Decomposition chemical reaction is the reaction where only one compound decomposes and results in two or more than two products. A species can contain more than one chemical element (HCl, for example, contains hydrogen and chlorine). Students will be able to observe chemical changes: one in which a gas is produced and a change in temperature occurs, and one in which the substance is visibly changed. Alkalinity and Maillard reactions – interactions between carbohydrates and amino acids. ] Chemical Quantities Lab: 1/10/20: 1/9/20 ~Guest Teacher~ Chemical Quantities Lab. Through experimenting with quantities of reactants and temperature, students are lead to write claims about how these variables affect the rate of the reaction. It dissolves only slightly in water and will evaporate to air. 1/P12: Predicting Chemical Reactions (5. Use lab data to calculate molar mass of an unknown metal. The basis of this experiment is to determine the heat of reaction from the temperature dependence of equilibrium constant. Titration, process of chemical analysis in which the quantity of some constituent of a sample is determined by the gradual addition to the measured sample of an exactly known quantity of another substance with which the desired constituent reacts in a definite, known proportion. You will do a double mixed-aldol condensation reaction between acetone and benzaldehyde. Flask 4 will produce only the same amount of hydrogen as Flask 3 and have excess Mg left over, since the reaction is limited by the HCl. Hi, I really need help with my lab report. Chemists spend a great deal of time tweaking the conditions of known reactions. CH104 Quantitative Analysis (Lab) Introduction In this course you will learn basic volumetric and gravimetric methods of chemical analysis. 
The carbon dioxide gas was produced through the chemical reaction in the bag. Solutions, Week 12. Reaction Equation Type Of Reaction 1. Chemical Reactions and Quantities. • 90 Questions in all • Questions are editable • ANSWERS are included for every problem. Light the Bunsen Burner 2. From the benzoic acid runs, use the known ΔcombH for benzoic acid to determine the calorimeter constant. 22nd and Dec. Chemists use the mole unit to represent 6.02 × 10^23 particles. A spontaneous reaction may involve an increase or decrease in enthalpy, it may. Purpose - The purpose of this lab was to observe different types of chemical reactions and to write and balance chemical equations. It also can be interpreted as: 4 moles of NH3 react with 5 moles of O2 to produce 4 moles of NO and 6 moles of H2O. Chemical reactions Week 8. 1 shows a section of what is commonly called the Activity Series where metals are placed in order of increasing. Stoichiometry Lab CHEMICAL REACTIONS OF COPPER AND PERCENT YIELD PRE-LAB QUESTIONS Before beginning this experiment in the laboratory, you should be able to answer the following questions: 1. When the copper has dissolved, add seven drops of distilled water to the tube. Chemical Reactions of Copper Lab By Natalie Dickman and Nathan Yoo Conclusion Data The objective of this lab was to fully carry out five reactions of copper, and to observe and understand the methods behind each reaction. Create and analyze Excel-based graphs of experimental data. Lab Report on Iron Stoichiometry. Several factors affect the rates of reactions. Items 1-10 will serve as your pre-lab notebook, and as usual, they MUST be completed BEFORE coming. The calculations assume that all heat released in each chemical reaction goes. See lab details below. Once you add them together, you get green then yellow. list of all materials used, including sizes (e.
A good way to think about a chemical reaction is the process of baking cookies. Chemical Reactions by Joanna Bruno Adapted from "Chemical Reactions" by Jen Varrella, Gena D. The question is a little vague so I will assume you are looking for the purpose of a basic general chemistry laboratory where various chemical solutions are mixed together in test tubes to see. the equilibrium between reagents and the products is achieved. Chemical equilibrium deals with to what extent a chemical reaction proceeds. You won't turn it in yet, but your TA will. For example, in the titration of hydrochloric acid (HCl) with a base such as sodium hydroxide (NaOH), the chemical reaction between these two species would have to be known. Dealing with quantities is very important in chemistry, as is relating quantities to each other. During the actual experiment, we will use centrifugation to separate the enzyme (soluble) from its substrate (insoluble amylose-azure) to stop the reaction. Answer In-Lab Questions #1 and #2 on page E4C-5. lab report precipitating calcium carbonate name:, section: 7039 partner’s name: instructor: illya nayshevsky date of experiment: 11/07/2016 experiment. Parent Information. Electrical work done by an electrochemical cell is therefore defined as. Observe and interpret chemical reactions. Calculated amounts of products are called theoretical yield. Example Lab Report for Organic Labs Actual lab report is in black; notes about how to write reports are in red. Science Lab 6-2: Factors Affecting Reaction Rate Effect of Surface Area in the rate of a chemical reaction. Ionic and covalent bonds, reaction rates and equilibrium, behavior of gases 5. Equations that are not properly balanced are not correct chemical equations, even if they posses the correct elements and quantities. 
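Since calculated amounts of products are called the theoretical yield, percent yield is simply the measured mass divided by that calculated mass. The masses below are invented example values, not data from the copper lab:

```python
# Percent yield = actual mass recovered / theoretical (calculated) mass x 100.

def percent_yield(actual_g, theoretical_g):
    return actual_g / theoretical_g * 100.0

# Invented example: 1.58 g of product recovered against 1.75 g predicted.
print(percent_yield(1.58, 1.75))  # ~90.3 %
```

A percent yield over 100% usually signals an error such as a wet or impure product, which is a useful sanity check when writing the conclusion.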
LAB- Types of Chemical Reactions (*Lab Handout) - FInish Lab, "Types of Chemical Reactions" (on loose-leaf paper): Observations (make table) & Conclusion for each section, A - G*DUE by end of class tomorrow QUIZ- Types of Chemical Reactions = Mon, Oct. 6 - Titrations; Unit 7: Lab Report. Chemical reactions are normally represented by balanced chemical equations. To find the actual rate we plot a graph of volume of hydrogen (cm3) against time (seconds). Chemical Reactions Lab Objectives: 1. Reaction Kinetics: The Iodine Clock Reaction Introduction The "clock reaction" is a reaction famous for its dramatic colorless-to-blue color change, and is often used in chemistry courses to explore the rate at which reactions take place. It also can be interpreted as: 4 molesof NH3reacts with 5 molesof O 2to produce 4 molesof NO and 6 molesof H2O. Chemical quantities in reactions Week 9. Lecture: Going over 8. Light the Bunsen Burner 2. 1- The Iodine Clock Reaction Introduction In this experiment, you will study a reaction that proceeds at an easily measured rate at room temperature: S 2O 8 2-+ 2I-2SO 4 2-+ I 2 persulfate iodide sulfate iodine In the first part of the experiment, the rate equation will be determined by investigating. Scenario-Based Activity. Use these steps if you are applying stoichiometry to this experiment. Standard Synthesis Laboratory Report Format: The following layout is standard for a "synthesis reaction" report. No food or drink is allowed in lab unless food or drinks are provided as a part of the lab. Topic: Chemistry : Chemical quantities and aqueous reactions: precipitation reactions, acid-base and gas-evolution reactions, and oxidation-reduction reactions. 4 kJ/mol When one mole of sodium hydroxide is dissolved in water, the reaction (the system) releases 44. The new AP Chemistry Course and Exam Description has identified Learning Objectives which need to be taught and practiced to ensure students perform well on the AP Chemistry Exam. 
Using the top-loading balance, pre-weigh about 2. The chemical reaction that occurs between the acid and the base allows one to calculate the initial concentration (or amount) of the acid. Instituto Cristiano Bilingüe Sunshine Meggan E. 2Al(s) + 3CuCl2(aq) 3Cu(s) + 2AlCl3(aq) (equation 1) 0. They do not need to be copied. Lyiaba Mian Professor C. Objectives reactions lab coat or starting substance forms. Start studying Ch. 2Al(s) + 3CuCl2(aq) 3Cu(s) + 2AlCl3(aq) (equation 1) 0. It is believed that the process was known (metallurgy) as early as 4500 BC. Write the balanced chemical equation for the complete combustion of ethanol, C2H5OH(l). In order to determine the identity of the metal several reactions were performed. Monosaccharides are measured using the various methods described previously. The old old 85-99 elite old 100 or older fastest growing subgroup?? is the old old, sometimes referred to as the. Lab Course Outcomes. 10H 2 O in water. quantities of vitamins and minerals are required, a deficiency can have devastating effects. Answer questions similar to pre-lab 1-2, post-lab 1-2. There is no separate lab report. 9-10: I can use the metric units to measure quantities section 1-1 problems. Electrical work. The distance half way around the edge of the circle will be 3. Chemistry lab: 1 page report, double spaced, times roman, 12 inch font, with at least 2 or more references. Introduce heat, water, vinegar, and iodine. The reaction in Part C is a neutralization. Lab Manual & Supplies "Chemistry in the Laboratory", by J. Chapter 8 - Quantities in Chemical Reactions - Duration: 57:08. 0 M acetone and water using a 10. Powered by Create your own unique website with customizable templates. View Lab Report - Informal Lab 8. The distance half way around the edge of the circle will be 3. They are necessary for normal growth and metabolism. Introduction: Discuss briefly the major goals of the lab. If you have a dozen carrots, you have twelve of them. 
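The acid-concentration calculation mentioned above reduces to a mole balance once the neutralization reaction is known; for a 1:1 reaction such as HCl + NaOH → NaCl + H2O it looks like this. The molarity and volumes are assumed example numbers:

```python
# Acid molarity from titration data for a 1:1 neutralization
# (HCl + NaOH -> NaCl + H2O). Example numbers are assumed.

def acid_molarity(base_molarity, base_ml, acid_ml):
    mol_base = base_molarity * base_ml / 1000.0  # M x L = mol
    return mol_base / (acid_ml / 1000.0)         # 1:1, so mol acid = mol base

# 25.00 mL of 0.100 M NaOH neutralizes 20.00 mL of HCl -> 0.125 M HCl.
print(acid_molarity(0.100, 25.00, 20.00))
```

For acids or bases with other stoichiometries (e.g. H2SO4), the mole-ratio factor from the balanced equation must be inserted before dividing by the acid volume.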
Copper-Iron Stoichiometry Lab Report 10/3/12 Abstract: The lab performed required the use of quantitative and analytical analysis along with limiting reagent analysis. Typically, one of the reactants is used up before the other, at which time the reaction stops. This is because when we carry out the experiment in the laboratory, we usually. The standard enthalpy change ΔH° is expressed in kJ/mol. Pultz and J. When heated, magnesium reacts readily with oxygen in the air, to produce magnesium oxide. The written lab report is worth a total of 50 points (your lab notebook scores for the entire semester are worth 100 points), so your written lab report is a big part of your lab grade. Don) pg 12 Demo: Measure out 1 mole of NaCl, H2O, NaHCO3 and show to your teacher In class Podcast 3. Using a balanced chemical equation to calculate amounts of reactants and products is called stoichiometry. where n is the stoichiometric number of electrons passing through the cell circuit, a unitless quantity determined from cell half-reactions; F is Faraday's constant, which is the charge of a mole of electrons and equals. Presence of catalyst 4. The coefficients in a. 1 g of product should be prepared in a single run. In simple terms, one substance gains oxygen (or loses hydrogen) and another loses oxygen (or gains hydrogen). Minimize the potential for accidents: Design chemicals and their forms (solid, liquid, or gas) to minimize the potential for chemical accidents including explosions, fires, and releases to the environment. Introduction. I'm not sure about a chemical reaction, but a tesla coil produces a decent amount of ozone, you can even use it to decontaminate water (lots of water treatment is done with ozone). 
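The electrical-work definition referenced above, with n the stoichiometric number of electrons and F Faraday's constant, evaluates as w = nFE. The Daniell cell values used here (n = 2, E° ≈ 1.10 V) are a standard textbook example, not data from this lab:

```python
# Maximum electrical work per mole of reaction: w = n * F * E_cell.
F = 96485.0  # Faraday's constant, C per mole of electrons

def cell_work_joules(n, e_cell_volts):
    return n * F * e_cell_volts

# Textbook Daniell cell (Zn/Cu2+): n = 2, E ~ 1.10 V -> ~212 kJ/mol.
print(cell_work_joules(2, 1.10))
```

Because coulombs times volts gives joules, the result comes out directly in J per mole of reaction as written.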
The Grignard Reaction: A Microscale Preparation of Benzoic Acid Introduction Your laboratory skills have grown considerably since the first of the semester, and you are ready for the challenge of a famous reaction--one marked by unusual materials and striking chemical and physical changes. Hello, I really don't understand how to find the grams of moles in some of these reactions. The last two conversion factors convert from amount of one substance in a chemical reaction (mL NaOH solution) to amount of another substance in the reaction (mol HNO 3). Abstract This series of reactions (Figure 1) were carried out first using benzil and sodium borohydride to form the hydrobenzoin product, then the hydrobenzoin product was combined with anhydrous acetone and iron trichloride to form the product (4S-5R)-2,2-dimethyl-4,5-diphenyl-1,3-dioxolane. It is a super technical-sounding word that simply means using ratios from the balanced equation. Purpose: To view the actual chemical reactions, write the correct balanced chemical equation, and type of chemical reaction. -Lab Preparation (mini report) Lesson 3. The rate is not a constant throughout the reaction - it changes! 2. [KMnO 4] x:: Permanganate concentration raised to some power x. Power Supply Zn 2. "Winds of change. Light the Bunsen Burner 2. Some combinations will be too slow, the acid is not reactive enough, or the salt produced as the other product is insoluble and precipitates out, covering the. Carbon dioxide will be generated. For additional details on what specific results may mean, see the sections below on: Visual examination; Chemical examination; Microscopic examination To see an example of a urinalysis lab report, see this sample report. doc: File Size: 49 kb:. percent yield of the reaction determined. Plan: We are asked to calculate the amount of product, given the amounts of two reactants, so this is a limiting reactant problem. 
Purpose The reaction of iron (III), Fe 3+, with thiocyanate, SCN–, to yield the colored product, iron (III). Counting Atoms [Text Pilot] Balancing Plix [Due the first 2 to get the idea of limiting reagants and excess reagants using chemicals and hot dogs. Lecture: Part III 8. • Always wear safety glasses or goggles at all times in the laboratory. The amount that forms is determined by the molar ratio between these two chemicals as seen in the chemical equation given above. The table below lists most of the information that is needed for a formal written report. Hydrogen sulfide, $$\ce{H2S}$$, is formed by the direct combination of an acid (source of $$\ce{H^{+}}$$) and. Dibenzalacetone by Aldol Condensation 45 ALDOL SYNTHESIS of DIBENZALACETONE, AN ORGANIC ( SCREEN Overview: The reaction of an aldehyde with a ketone employing sodium hydroxide as the base is an example of a mixed aldol condensation reaction. These form complex shapes. Lab Instructor:_____ PREPARATION FOR CHEMISTRY LAB: COMBUSTION 1. To demonstrate reversible reactions. In the study of chemical reactions, chemistry students first study reactions that go to completion. When handling highly reactive chemicals, use the smallest quantities needed for the experiment. The mole will allow us to study in greater detail chemical formulas and chemical reactions. They are necessary for normal growth and metabolism. Chemical quantities in reactions Week 9. Changes in physical properties. Reaction of vinegar with bicarbonate of soda. Jacquie Richardson CHEM 3321-100 1/1/2000 The introduction, physical data, and procedure make up the prelab, which you need to type up (not handwrite) before the experiment and bring with you. Flask 1 and 2 are limited by smaller quantities of Mg. Chemical Reaction. 15: Fri, Oct. 1 Average Reaction Rates  Activation Energy Worksheet  Factors Affecting Reaction Rates SG 17. 
This lab will explore double-replacement reactions, the combination of atoms/ions reactants that form completely different products. Science Lab 6-2: Factors Affecting Reaction Rate Effect of Surface Area in the rate of a chemical reaction. For known amounts of reactants, theoretical amounts of products can be calculated in a chemical reaction or process. Dibenzalacetone by Aldol Condensation 45 ALDOL SYNTHESIS of DIBENZALACETONE, AN ORGANIC ( SCREEN Overview: The reaction of an aldehyde with a ketone employing sodium hydroxide as the base is an example of a mixed aldol condensation reaction. Carefully place 0. PS-5 investigate chemical reactions and the release or consumption of energy. Begin learning about matter and building blocks of life with these study guides, lab experiments, and example problems. What actually happens is this: the acetic acid (that's what makes vinegar sour) reacts with sodium bicarbonate (a compound that's in baking soda) to. Unit 2- Matter and its Properties 2nd TRIMESTER. Questions 1. This report describes experiments aimed at determining the concentration of chloride in a solid sample. The last reaction, the decomposition of potassium chlorate, includes manganese(IV) oxide (MnO2) as a catalyst. The reaction of the antigen that is present in the person's sample to the specific antibody is compared with reactions of known concentrations and the amount of antigen is reported. Heats of Reaction When reactants come together in a chemical reaction to form products, chemical bonds are broken (in the reactants) and formed (in the products). That behavior is a chemical property of iron because it involves how iron can undergo a chemical change. Determine the limiting reactant in a chemical reaction. For example, when some amino acids or fatty acids are broken down into other useful molecules, hydrogen peroxide is produced as a by-product. Within two lab reports we focused on factors affecting reaction rate with enzymes. 
Contact lab technician for disposal of large. Flask 1 and 2 are limited by smaller quantities of Mg. Chem Lab 5. Reaction \ref{3}: Reaction \ref{4}: Your goal in this lab is to experimentally verify the mole-to-mole ratios between a certain reactant and a certain product in both reactions. The reaction is fastest at the start, gradually becoming slower as the reaction proceeds. Today's Gold Prices. (The patient ultimately survived. We learn ways of representing molecules and how molecules react. 21 Homework-Begin % Yield Lab Report (Mini Report). Materials Needed: "Patriotic Colors" Lab Kit straw Chemicals Needed: Common household chemicals (various). Laboratory 1: Chemical Equilibrium 1 Reading: Olmstead and Williams, Chemistry , Chapter 14 (all sections) Purpose: The shift in equilibrium position of a chemical reaction with applied stress is determined. Combustion is an exothermic reaction. Based on their observations, students are then asked to explain the evidence that a chemical reaction occurred. Classifying Chemical Reactions Lab video Products (Reg) Reaction Rates Student Inquiry Lab Report Organizer (With lines) Reaction Rates Student Inquiry Lab Report Organizer (NO Lines) Reaction Rates Student Inquiry Lab Rubric. Isolation of a substance from animal or plant matter is another application of extraction, either to obtaining the compound for some end use (e. CHEM 108 CD1 May 02, 2019 Experiment 8: Quantities in. Determine the limiting reactant in a chemical reaction. We have adapted the vitamin C clock reaction to a student laboratory experiment in which the orders with respect to peroxide and iodide, the rate constant, and the activation energy are determined by the method of initial rates. Whether and How Authentic Contexts Using a Virtual Chemistry Lab Support Learning. Substance Name: 1,2-dichloroethane (Ethylene Dichloride, EDC) EC Number: 203-458-1 CAS Number: 107-06-2 Applicant: Eli Lilly S. Unit 1- Physical Quantities. 
Flask 4 will produce only the same amount of hydrogen as Flask 3 and have excess Mg left over, since the reaction is limited by the HCl. time taken by a reaction to reach equilibrium. UNIT TEST- Quantities in Chemical Reactions:. An "indicator" is a chemical that changes color as conditions in the solution change. Make, record, and report experimental observations. Brass & Copper. A solution of lead (II) nitrate is mixed with a solution of sodium iodide. In this lab, a series of reactions, with slight variations in concentrations were completed so the overall order of the reaction could be determined. ] Chemical Quantities Lab: 1/10/20: 1/9/20 ~Guest Teacher~ Chemical Quantities Lab. As the bottle heats up in this exothermic reaction, the water. Food spoiling e. Lab 6: CHEMICAL KINETICS TO DYE FOR Laboratory Goals chemical reactions. 7 5 4 6 9/18 9/19 Gravimetric Analysis of Hard Water 4. Surface area of mixture molecules For a reaction to occur, according to the Collision Theory, the collision. 72 EXPERIMENT 6: CHEMICAL REACTIONS aluminum foil is placed into an aqueous hydrochloric acid solution, Al replaces the H+ in the HCl, and H+ changes into its free state, H 2 (g). The Rate Law background: For most chemical reactions, changing the concentrations of the reactants will change the rate of the reaction. Stoichiometry: Stoichiometry is the study of the relationship between the relative quantities of substances taking part in a reaction or forming a compound. Enzymes accelerate the velocity of virtually all reactions that occur in biological systems including those involved in breakdown, synthesis and chemical transfers. Balance chemical equations. Arthur's Science Page. The US Environmental Protection Agency (EPA) and the Occupational Safety and Health Administration (OSHA) have investigated recent accidents at petroleum refineries, chemical manufacturing facilities, tolling operations, chemical distributors, and other types of facilities. 
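The method of initial rates mentioned above (varying concentrations across runs to determine the order x in a rate law such as rate = k[KMnO₄]ˣ) can be sketched numerically. The concentrations and rates below are made-up illustrative numbers, not data from this lab:

```python
import math

# Method-of-initial-rates sketch: for rate = k * [A]**x, two runs with
# different initial concentrations of A give
#     x = log(rate2 / rate1) / log(conc2 / conc1)
# The values below are illustrative assumptions, not measured data.

def reaction_order(conc1, rate1, conc2, rate2):
    """Estimate the order x with respect to A from two initial-rate runs."""
    return math.log(rate2 / rate1) / math.log(conc2 / conc1)

# Doubling [A] from 0.10 M to 0.20 M quadruples the rate -> second order.
x = reaction_order(0.10, 2.0e-3, 0.20, 8.0e-3)
print(round(x))
```

The same two-run comparison, repeated while holding other reactants constant, gives the order with respect to each species in turn.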
Pour 1 – 2 mL of limewater, Ca(OH)2(aq), into a test tube. 9-10: I can use the metric units to measure quantities section 1-1 problems. To do colorimetry, you need a colorimeter, which starts in the hundreds of dollars for a lab-grade tool. 2Al(s) + 6HCl(aq) 2AlCl 3 (aq) + 3H 2 (g) Table 6. Determination of Mass and Mole Relationship in a Chemical Reaction. KEIO ACADEMY OF NEW YORK CHEMISTRY 2019-2020. We ve got what is a chemical equilibrium. The only blog of its kind, it serves as a unique forum for exchange and discussion of lab and plant safety and accident information. Reaction #1: Add a small scoopful of sodium chloride (NaCl) into an evaporating dish. In reference to a certain chemical element, the atomic mass as shown in the periodic table is the average atomic mass of all the chemical element's stable isotopes. Online writing lab college application essay why us. It is to be turned in along with any assigned questions. A chemical reaction takes place when new products are made from the reactants. A reaction was observed when placed in a solution of zinc chloride. 100% yield) because the reaction may not go to completion because it may be reversible or some of the product may be lost when it is separated from the. (2018, February). The Laboratory Standard defines a laboratory as a “workplace where relatively small quantities of hazardous chemicals are used on a non-production. Lecture: Part III 8. Also we had to be able to analyze and classify the reactions into certain groups. This particular lab report shows my ability to work with quantitative data, and analyze the calculations and measurements from the lab. 1 Presentation: 1. 2 mL (2 - 3 drops) of liquid material (NOT MORE) should be used. When some formulas of the products are not known,. - Calculate equilibrium constant from equilibrium concentrations. 2) New properties appear. 2 Water dissociates into ions in the order of 1x10-7 M [H+] and 1x10-7 M [OH-]. 
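The limiting-reactant and theoretical-yield reasoning described above can be sketched numerically for the aluminum–copper(II) chloride reaction quoted earlier (2Al + 3CuCl₂ → 3Cu + 2AlCl₃). The input masses are illustrative assumptions, not measurements from the lab:

```python
# Limiting-reactant sketch for 2 Al + 3 CuCl2 -> 3 Cu + 2 AlCl3.
# The input masses are illustrative assumptions, not lab data.

MOLAR_MASS = {"Al": 26.98, "CuCl2": 134.45, "Cu": 63.55}  # g/mol

def limiting_reactant(mass_al, mass_cucl2):
    """Return the limiting reactant and the theoretical yield of Cu in grams."""
    mol_al = mass_al / MOLAR_MASS["Al"]
    mol_cucl2 = mass_cucl2 / MOLAR_MASS["CuCl2"]
    # Divide each mole amount by its stoichiometric coefficient and compare.
    if mol_al / 2 < mol_cucl2 / 3:
        limiting, mol_cu = "Al", mol_al * 3 / 2      # Cu : Al is 3 : 2
    else:
        limiting, mol_cu = "CuCl2", mol_cucl2        # Cu : CuCl2 is 3 : 3
    return limiting, mol_cu * MOLAR_MASS["Cu"]

limiting, theoretical_g = limiting_reactant(0.50, 5.00)
print(limiting, round(theoretical_g, 2))
```

Dividing an actual measured mass of copper by this theoretical yield (and multiplying by 100) gives the percent yield discussed in the lab.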
Observations will initially focus on qualitative data and analysis. To incorporate graphing exercises involving mass relationships into lab activities, three simple actions are necessary: (i) the quantities of reactants that have undergone transformation and the quantities of products that have been made as a result of the chemical reaction must be directly measured or deduced, (ii) the mass of reactants in the. AP Chemistry is an in-depth, fast-paced second-year chemistry course for advanced, science-oriented students.
2020-05-31 13:19:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.44218742847442627, "perplexity": 2657.069756840141}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347413406.70/warc/CC-MAIN-20200531120339-20200531150339-00425.warc.gz"}
https://www.baryonbib.org/bib/00fd05af-cc8d-462c-84c3-ead46d9cc493
PREPRINT

# The assembly of dusty galaxies at $z\ge 4$: the build-up of stellar mass and its scaling relations with hints from early JWST data

C. Di Cesare, L. Graziani, R. Schneider, M. Ginolfi, A. Venditti, P. Santini, L. K. Hunt

Submitted on 12 September 2022

## Abstract

The increasing number of distant galaxies observed with ALMA by the ALPINE and REBELS surveys, and the early release observations of the James Webb Space Telescope (JWST), promise to revolutionize our understanding of cosmic star formation and the assembly of normal, dusty galaxies. Here we introduce a new suite of cosmological simulations performed with \texttt{dustyGadget} to interpret high-redshift data. We investigate the comoving star formation history, the stellar mass density and a number of galaxy scaling relations such as the galaxy main sequence (MS), the stellar-to-halo mass and dust-to-stellar mass relations at $z>4$. The predicted star formation rate and total stellar mass density rapidly increase in time with a remarkable agreement with available observations, including recent JWST ERO and DD-ERS data at $z\ge 8$. A well-defined galaxy MS is found already at $z<10$ following a non-evolving power-law, in agreement with both JWST and REBELS data at the low/high-mass end respectively, and consistent with a star formation efficiently sustained by gas accretion and a specific star formation rate increasing with redshift, as established by recent observations. A population of low-mass galaxies ($\log(\mathrm{M}_\star/\mathrm{M}_\odot) < 9$) at $z\le 7$ exceeding present estimates in the stellar mass function is also responsible for a large scatter in the stellar-to-halo and dust-to-stellar mass relations. Future JWST observations will provide invaluable constraints on these low-mass galaxies, helping to shed light on their role in cosmic evolution.

## Preprint

Comment: 20 pages, 10 figures, submitted to MNRAS.
Comments are welcomed
Subject: Astrophysics - Astrophysics of Galaxies
2022-09-27 11:43:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 6, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.36179789900779724, "perplexity": 3852.1406863383722}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335004.95/warc/CC-MAIN-20220927100008-20220927130008-00528.warc.gz"}
https://mathematica.stackexchange.com/questions/89012/why-doesnt-import-work-with-xowa-offline-wikipedia-database
# Why doesn't Import work with XOWA offline Wikipedia database?

I'm having trouble with importing HTML from the XOWA offline Wikipedia database. I set up a local server for data mining, but I can't access it with Mathematica. I can request an HTML page with curl or wget, but Mathematica's Import[] crashes the XOWA server with:

failed to process request; request=<<NULL>> err_msg=[err 0] <gplx> invalid content_type: line=Content-Type: application/x-www-form-urlencoded request=type: GET url: /en.wikipedia.org/wiki/Test protocol: HTTP/1.1 host: localhost:8080 user_agent: Wolfram HTTPClient 10.2 accept: */* accept_encoding: <<NULL>> dnt: false x_requested_with: <<NULL>> cookie: <<NULL>> referer: <<NULL>> content_length: 0 content_type: <<NULL>> content_type_boundary: <<NULL>> connection: <<NULL>> pragma: <<NULL>> cache_control: <<NULL>> [trace]: gplx.core.net.Http_request_parser.Parse_content_type(Unknown Source) gplx.core.net.Http_request_parser.Parse(Unknown Source) gplx.xowa.servers.http.Http_server_wkr_v2.Run(Unknown Source) gplx.xowa.servers.http.Http_server_wkr_v2.Invk(Unknown Source)

I'm using ImportString[Import["!curl \"http://localhost:8080/en.wikipedia.org/wiki/"<>link<>"\"","Text"],"XMLObject"] at the moment, but I'm interested in fixing the problem with direct importing. Unfortunately I don't understand the Java errors, so I can't debug this myself.

This is an issue with XOWA. The HTTP server was rewritten in v2.7.2 to handle POSTs and other features. However, it looks like it crashes on your request. I'll look at fixing this for v2.8.2. I'll comment again here when I have a resolution, but feel free to contact me directly for more info. Hope this helps! [Edit: This was fixed for v2.8.2. XOWA now accepts GET requests with a Content-Type. See https://github.com/gnosygnu/xowa/releases ]

• Thanks for the notice, I look forward to using the fixed server. – shrx Aug 3 '15 at 7:38

• I have a fix ready for this.
I created a ticket here with more info: github.com/gnosygnu/xowa/issues/17. Basically, XOWA became more strict about the request headers in v2.7.2 and rejected GET requests with Content-Type headers. curl and wget probably work because they don't submit Content-Type headers, but Mathematica fails. This has been relaxed for v2.8.2, which will be released on Monday. If you want a version before then, let me know what OS and bitness, and I'll make a temporary build for you. Aug 5 '15 at 3:29

• Also, are you planning to mine all the `<math>` statements in English Wikipedia? If so, I can probably put together a sqlite database for you with these columns: page_namespace, page_name, math_source. This will take me 4 days to generate, but it will probably be faster than running it through the HTTP server. I can also provide instructions as well if that's where you're headed. Let me know if you have any questions. And thanks again for reporting the issue! Aug 5 '15 at 3:34

• I am actually interested in creating article link connectivity graphs, inspired by Wikipedia:Getting to Philosophy but also for other common words besides "philosophy". I wrote a pretty robust parser that handles a couple of thousand articles, which I more or less thoroughly checked "by hand", without fail. The only issue is the speed: it can process only 100 - 200 articles per minute. Most of the time is spent requesting the article from the server, processing it to an XMLObject and then parsing it for the correct link. [cont.] – shrx Aug 5 '15 at 6:41
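The header difference described above — a bare GET versus a GET carrying a Content-Type header — can be illustrated without touching the network by just assembling the raw request bytes. This is a sketch in Python (not what Mathematica or XOWA actually run); the path and host are taken from the error log in the question:

```python
# Sketch of the header difference that crashed XOWA v2.7.2: curl/wget send a
# bare GET, while Wolfram's HTTPClient also sent a Content-Type header.
# No network access here; we only assemble the raw request text.

def build_get(path, host, extra_headers=()):
    """Build a minimal HTTP/1.1 GET request as a string."""
    lines = [f"GET {path} HTTP/1.1", f"Host: {host}"]
    lines += [f"{name}: {value}" for name, value in extra_headers]
    return "\r\n".join(lines) + "\r\n\r\n"

curl_style = build_get("/en.wikipedia.org/wiki/Test", "localhost:8080")
wolfram_style = build_get(
    "/en.wikipedia.org/wiki/Test", "localhost:8080",
    [("Content-Type", "application/x-www-form-urlencoded")],
)
print("Content-Type" in curl_style, "Content-Type" in wolfram_style)
```

Pre-2.8.2 XOWA rejected the second form for GETs, which matches the `invalid content_type` line in the server's error output.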
– shrx Aug 5 '15 at 6:44 As @gnosygnu mentioned in the comments of his answer, the problem is with the "Content-Type" header. So as a temporary fix before XOWA is updated, this works: ImportString[URLFetch["http://localhost:8080/en.wikipedia.org/wiki/Test",
2021-09-22 18:41:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3568039834499359, "perplexity": 3725.1335048833903}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057371.69/warc/CC-MAIN-20210922163121-20210922193121-00065.warc.gz"}
https://brilliant.org/problems/fast-moving-wires/
# Fast-moving wires

Consider two long parallel wires with linear charge density $$\lambda$$ (charge per unit length) separated by a distance d. The wires move in the same direction (parallel to the wire direction) at a constant speed v. For what value of v, in meters per second, does the magnetic attraction balance the electrical repulsion? You do not need special relativity in order to solve the problem correctly. Note that even though the charge density is frame dependent, the choice of the frame does not affect the answer.

Details and assumptions

$\epsilon_{0}=8.85\times 10^{-12}~\mathrm{F/m}$

$\frac{\mu_{0}}{4\pi}= 10^{-7}~\mathrm{H/m}$
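One way to see the answer: setting the per-unit-length electric repulsion $F_E = \lambda^2/(2\pi\epsilon_0 d)$ equal to the magnetic attraction $F_B = \mu_0 (\lambda v)^2/(2\pi d)$ cancels both $\lambda$ and $d$, leaving $v = 1/\sqrt{\mu_0\epsilon_0}$. A quick numerical check with the constants given in the problem (a sketch, not part of the original problem statement):

```python
import math

# Electric repulsion per unit length:  F_E = lam**2 / (2*pi*eps0*d)
# Magnetic attraction per unit length: F_B = mu0 * (lam*v)**2 / (2*pi*d)
# Setting F_E = F_B, both lam and d cancel, so v = 1/sqrt(mu0*eps0).

eps0 = 8.85e-12            # F/m, as given in the problem
mu0 = 4 * math.pi * 1e-7   # H/m, from mu0/(4*pi) = 1e-7

v = 1 / math.sqrt(mu0 * eps0)
print(v)  # close to 3e8 m/s
```

The balancing speed comes out as the speed of light, consistent with the note that the answer is frame independent.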
2017-08-18 20:00:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6379622220993042, "perplexity": 285.5775987484431}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886105108.31/warc/CC-MAIN-20170818194744-20170818214744-00712.warc.gz"}
http://www.theoryofcomputing.org/categories/algorithms.html
Articles under category: Algorithms ToC Library Graduate Surveys 5 (2013) 60 pages Fast Matrix Multiplication Vol 10, Article 13 (pp 341-358) [APRX-RND12 Spec Issue] Approximation Algorithm for Non-Boolean Max-$k$-CSP Vol 10, Article 12 (pp 297-339) Width-Parametrized SAT: Time--Space Tradeoffs Vol 10, Article 11 (pp 257-295) Efficient Rounding for the Noncommutative Grothendieck Inequality by Assaf Naor, Oded Regev, and Thomas Vidick Vol 10, Article 10 (pp 237-256) Lower Bounds for the Average and Smoothed Number of Pareto-Optima Vol 9, Article 30 (pp 897-945) Why Simple Hash Functions Work: Exploiting the Entropy in a Data Stream Vol 9, Article 19 (pp 617-651) Complete Convergence of Message Passing Algorithms for Some Satisfiability Problems by Uriel Feige, Elchanan Mossel, and Dan Vilenchik Vol 8, Article 26 (pp 597-622) A Constant-Factor Approximation Algorithm for Co-clustering Vol 8, Article 25 (pp 567-595) [Motwani Special Issue] Online Graph Edge-Coloring in the Random-Order Arrival Model Vol 8, Article 24 (pp 533-565) Solving Packing Integer Programs via Randomized Rounding with Alterations Vol 8, Article 20 (pp 429-460) [Motwani Special Issue] Budget-Constrained Auctions with Heterogeneous Items Vol 8, Article 19 (pp 415-428) Distance Transforms of Sampled Functions Vol 8, Article 18 (pp 401-413) [Motwani Special Issue] An $O(k^3\log n)$-Approximation Algorithm for Vertex-Connectivity Survivable Network Design by Julia Chuzhoy and Sanjeev Khanna Vol 8, Article 15 (pp 351-368) [Motwani Special Issue] One Tree Suffices: A Simultaneous $O(1)$-Approximation for Single-Sink Buy-at-Bulk by Ashish Goel and Ian Post Vol 8, Article 14 (pp 321-350) [Motwani Special Issue] Approximate Nearest Neighbor: Towards Removing the Curse of Dimensionality Vol 8, Article 9 (pp 209-229) [Motwani Special Issue] Improved Bounds for Speed Scaling in Devices Obeying the Cube-Root Rule by Nikhil Bansal, Ho-Leung Chan, Dmitriy Katz, and Kirk Pruhs Vol 8, Article 7 (pp 165-195) 
[Motwani Special Issue] Online Scheduling to Minimize Maximum Response Time and Maximum Delay Factor Vol 8, Article 6 (pp 121-164) [RESEARCH SURVEY] The Multiplicative Weights Update Method: a Meta-Algorithm and Applications by Sanjeev Arora, Elad Hazan, and Satyen Kale Vol 8, Article 4 (pp 69-94) [Motwani Special Issue] Regularity Lemmas and Combinatorial Algorithms by Nikhil Bansal and Ryan Williams Vol 7, Article 5 (pp 49-74) Metric Clustering via Consistent Labeling Vol 7, Article 2 (pp 19-25) [NOTE] Inverting a Permutation is as Hard as Unordered Search Vol 6, Article 11 (pp 247-290) The Submodular Welfare Problem with Demand Queries by Uriel Feige and Jan Vondrák Vol 6, Article 8 (pp 179-199) Routing Without Regret: On Convergence to Nash Equilibria of Regret-Minimizing Algorithms in Routing Games by Avrim Blum, Eyal Even-Dar, and Katrina Ligett Vol 6, Article 2 (pp 27-46) Reordering Buffers for General Metric Spaces Vol 5, Article 9 (pp 173-189) All Pairs Bottleneck Paths and Max-Min Matrix Products in Truly Subcubic Time Vol 5, Article 4 (pp 83-117) SDP Gaps and UGC-hardness for Max-Cut-Gain by Subhash Khot and Ryan O'Donnell Vol 4, Article 9 (pp 191-193) [COMMENT] On the LP Relaxation of the Asymmetric Traveling Salesman Path Problem Vol 4, Article 5 (pp 111-128) Approximation Algorithms for Unique Games Vol 4, Article 2 (pp 21-51) Optimal lower bounds for the Korkine-Zolotareff parameters of a lattice and for Schnorr's algorithm for the shortest vector problem Vol 4, Article 1 (pp 1-20) Single Source Multiroute Flows and Cuts on Uniform Capacity Networks Vol 3, Article 11 (pp 211-219) The Randomized Communication Complexity of Set Disjointness by Johan Håstad and Avi Wigderson Vol 3, Article 10 (pp 197-209) An O(log n) Approximation Ratio for the Asymmetric Traveling Salesman Path Problem by Chandra Chekuri and Martin Pál Vol 3, Article 9 (pp 179-195) Approximation Algorithms and Online Mechanisms for Item Pricing Vol 3, Article 8 (pp 159-177)
Removing Degeneracy May Require a Large Dimension Increase by Jiří Matoušek and Petr Škovroň Vol 3, Article 1 (pp 1-23) Censorship Resistant Peer-to-Peer Networks by Amos Fiat and Jared Saia Vol 2, Article 13 (pp 249-266) Correlation Clustering with a Fixed Number of Clusters Vol 2, Article 12 (pp 225-247) Matrix Approximation and Projective Clustering via Volume Sampling Vol 2, Article 11 (pp 207-224) Embedding the Ulam metric into ℓ1 Vol 2, Article 10 (pp 185-206) Learning Restricted Models of Arithmetic Circuits by Adam Klivans and Amir Shpilka Vol 2, Article 8 (pp 147-172) On Learning Random DNF Formulas Under the Uniform Distribution Vol 2, Article 7 (pp 137-146) An O(√n) Approximation and Integrality Gap for Disjoint Paths and Unsplittable Flow Vol 2, Article 4 (pp 65-90) Rank Bounds and Integrality Gaps for Cutting Planes Procedures Vol 2, Article 3 (pp 53-64) An Improved Approximation Ratio for the Covering Steiner Problem Vol 2, Article 2 (pp 19-51) Proving Integrality Gaps without Knowing the Linear Program Vol 1, Article 6 (pp 105-117) Combining Online Algorithms for Acceptance and Rejection by Yossi Azar, Avrim Blum, David P. Bunde, and Yishay Mansour
2014-10-23 16:30:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19749577343463898, "perplexity": 6809.388659656629}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413558067077.47/warc/CC-MAIN-20141017150107-00128-ip-10-16-133-185.ec2.internal.warc.gz"}
http://www.lmfdb.org/ModularForm/GL2/Q/holomorphic/19/1/
# Properties

| Property | Value |
|---|---|
| Label | 19.1 |
| Level | 19 |
| Weight | 1 |
| Dimension | 0 |
| Nonzero newspaces | 0 |
| Newform subspaces | 0 |
| Sturm bound | 30 |

## Defining parameters

Level: $$N$$ = $$19$$
Weight: $$k$$ = $$1$$
Nonzero newspaces: $$0$$
Newform subspaces: $$0$$
Sturm bound: $$30$$

## Dimensions

The following table gives the dimensions of various subspaces of $$M_{1}(\Gamma_1(19))$$.

|  | Total | New | Old |
|---|---|---|---|
| Modular forms | 9 | 9 | 0 |
| Cusp forms | 0 | 0 | 0 |
| Eisenstein series | 9 | 9 | 0 |

The following table gives the dimensions of subspaces with specified projective image type.

|  | $$D_n$$ | $$A_4$$ | $$S_4$$ | $$A_5$$ |
|---|---|---|---|---|
| Dimension | 0 | 0 | 0 | 0 |

## Hecke characteristic polynomials

There are no characteristic polynomials of Hecke operators in the database
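The Sturm bound of 30 quoted above is consistent with the standard formula $B = \lfloor k\,[\mathrm{SL}_2(\mathbb{Z}):\Gamma_1(N)]/12 \rfloor$, where for $N > 2$ the index is $N^2\prod_{p\mid N}(1-1/p^2)$. The sketch below uses this common convention and is not LMFDB's internal code:

```python
from math import floor

def gamma1_index(N):
    """Index [SL2(Z) : Gamma1(N)] = N^2 * prod_{p | N} (1 - 1/p^2), for N > 2."""
    idx = N * N
    n, p = N, 2
    while p * p <= n:
        if n % p == 0:
            idx -= idx // (p * p)   # multiply by (1 - 1/p^2), exactly in integers
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:                       # leftover prime factor
        idx -= idx // (n * n)
    return idx

def sturm_bound(N, k):
    """floor(k * index / 12) -- one common convention for the Gamma1(N) Sturm bound."""
    return floor(k * gamma1_index(N) / 12)

print(gamma1_index(19), sturm_bound(19, 1))
```

For N = 19 the index is 361 · (1 − 1/361) = 360, and with k = 1 the bound is 360/12 = 30, matching the value on the page.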
2020-10-01 01:35:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9545939564704895, "perplexity": 5544.115085505993}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600402130531.89/warc/CC-MAIN-20200930235415-20201001025415-00047.warc.gz"}
https://pypi.org/project/surface-dynamics/
Dynamics on surfaces

## Project description

The surface_dynamics package for SageMath adds functionality related to interval exchange transformations, translation surfaces, mapping classes and more. It is based on SageMath and relies heavily on:

• gmp or mpir for arbitrary precision arithmetic
• PARI/GP for number field computations
• GAP for finite group representations and permutation groups
• PPL (Parma Polyhedra Library) and LattE (Lattice point Enumeration) for polytope computations

## Prerequisites

Installing surface_dynamics requires a working Sage installation (with Cython and gcc). Installing the optional SageMath packages gap_packages and latte_int is recommended and will improve or extend the functionality in surface_dynamics. The optional package database_gap is also recommended if using SageMath < 8.6 (in SageMath 8.6 it was merged partly into the gap and partly into the gap_packages packages).

## Installation

The module is distributed on PyPI and is easily installed through the Python package manager pip. If you downloaded a binary from the SageMath website (including the Cygwin version running on Windows) or compiled from source, run the following command:

$ sage -pip install surface_dynamics [--user]

The --user option is optional and allows you to install the module in your user space (it does not require administrator rights).

If you use Debian or Ubuntu and you installed Sage through the operating system's package manager (that is, the package sagemath), run these two commands:

$ source /usr/share/sagemath/bin/sage-env
$ pip install surface_dynamics --user

If you use Arch Linux, you need to install from source (see next section).

## Install and use source version

This section provides detailed instructions on how to download, modify and install the development version of surface_dynamics.
In all commands,

• PIP has to be replaced by either pip, pip2, or sage -pip
• PYTHON has to be replaced by either python, python2 or sage -python

If you are an Arch Linux user with the sagemath package installed, use PIP=pip2 and PYTHON=python2. If you downloaded SageMath as a tarball or installed it from source, use PIP='sage -pip' and PYTHON='sage -python'.

You can install the latest development version in one line with:

$ PIP install git+https://github.com/flatsurf/surface_dynamics [--user]

As before, the --user option is optional and, when specified, will install the module in your user space.

You can also perform a two-stage installation that will allow you to modify the source code. The first step is to clone the repository:

$ git clone https://github.com/flatsurf/surface_dynamics

The above command creates a repository surface_dynamics with the source code, documentation and miscellaneous files. You can then change to the directory thus created and install the surface dynamics module with:

$ cd surface_dynamics
$ PIP install . [--user]

Do not forget the . that refers to the current directory.

When you don't want to install the package, or you are testing some modifications to the source code, a more convenient way of using surface dynamics is to do everything locally. To do so, you need to compile the module in place via:

$ PYTHON setup.py build_ext --inplace

Once done, you can import the surface_dynamics module. To check that you are actually using the right module (i.e. the local one) you can do in a SageMath session:

sage: import surface_dynamics
sage: surface_dynamics.__path__ # random
['/home/you/surface_dynamics/surface_dynamics/']

The result of the command must correspond to the path of the repository created by the git clone command given above. The compilation step PYTHON setup.py build_ext has to be redone each time you modify a C or Cython source file (i.e. a file with .c, .h, .pxd or .pyx extension).
In other words, it is not needed if you only modify or create Python files (i.e. .py files). If you wish to install your custom version of surface_dynamics, just use PIP as indicated before.

## Check

After installing surface_dynamics, check that it works by launching Sage and typing the following commands. You should get the same output as below.

sage: from surface_dynamics.all import *
sage: o = Origami('(1,2)', '(1,3)')
sage: o
(1,2)(3)
(1,3)(2)
sage: o.sum_of_lyapunov_exponents()
4/3
sage: o.lyapunov_exponents_approx() # abs tol 0.05
[0.33441823619678734]
sage: o.veech_group()
Arithmetic subgroup with permutations of right cosets
 S2=(2,3)
 S3=(1,2,3)
 L=(1,2)
 R=(1,3)
sage: q = QuadraticStratum(1, 1, 1, 1)
sage: q.orientation_cover()
H_5(2^4)
sage: q.components()
[Q_2(1^4)^hyp]
sage: c = q.components()[0]
sage: c
Q_2(1^4)^hyp
sage: c.orientation_cover_component()
H_5(2^4)^odd
sage: AbelianStrata(genus=3).list()
[H_3(4), H_3(3, 1), H_3(2^2), H_3(2, 1^2), H_3(1^4)]
sage: O = OrigamiDatabase()
sage: q = O.query(("stratum", "=", AbelianStratum(2)), ("nb_squares", "=", 5))
sage: q.number_of()
2
sage: for o in q:
....:     print("%s\n- - - - - - - -" % o)
(1)(2)(3)(4,5)
(1,2,3,4)(5)
- - - - - - - -
(1)(2)(3,4,5)
(1,2,3)(4)(5)
- - - - - - - -
sage: Q12_reg.lyapunov_exponents_H_plus(nb_iterations=2**20) # abs tol 0.05
[0.6634, 0.4496, 0.2305, 0.0871]
sage: Q12_reg.lyapunov_exponents_H_minus(nb_iterations=2**20) # abs tol 0.05
[1.0000, 0.3087, 0.1192]

## Installing development version - source code

The development webpage is

Assuming you have the program git on your computer, you can install the development version with the command:

$ sage -pip install git+https://github.com/flatsurf/surface_dynamics [--user]

## Contact

For problems with macOS: samuel.lelievre@gmail.com

## Authors

• Vincent Delecroix: maintainer
• Samuel Lelièvre: origami and permutation representatives for quadratic strata
• Charles Fougeron: Lyapunov exponents for strata coverings
• Luke Jeffreys: single
cylinder representatives for strata of Abelian differentials

## Citation

To cite the software, use the following BibTeX entry:

@manual{Sdyn,
  Author = {Delecroix, V. et al.},
  Month  = {March},
  Year   = {2019},
  Title  = {surface_dynamics - SageMath package, Version 0.4.1},
  Doi    = {10.5281/zenodo.3237923},
  Url    = {https://doi.org/10.5281/zenodo.3237923}
}

## Versions

• 0.4.6 was released on 2021-03-13 (as a Python package on PyPI)
• 0.4.5 was released on 2020-10-22 (as a Python package on PyPI)
• 0.4.4 was released on 2020-01-31 (as a Python package on PyPI)
• 0.4.3 was released on 2019-07-28 (as a Python package on PyPI)
• 0.4.2 was released on 2019-06-21 (as a Python package on PyPI)
• 0.4.1 was released on 2019-03-26 (as a Python package on PyPI)
• 0.4.0 was released on 2018-05-14 (as a Python package on PyPI)
• 0.3 was released on 2017-08-11 (as a Python package on PyPI)
• 0.2 was released on 2015-11-15 (as a Sage spkg)
• 0.1 was released on 2015-07-30 (as a Sage spkg)
2022-09-29 10:57:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17938749492168427, "perplexity": 10993.255744461509}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335350.36/warc/CC-MAIN-20220929100506-20220929130506-00446.warc.gz"}
http://forum.zkoss.org/question/21686/component-style-change-button-and-listbox-how-to-change-them/
# Component style change (button and listbox): how to change them?

Emanuele

Hi everyone, I'm migrating to ZK 3.5. Good work has been done with this new version! But I'm facing some problems with component styles.

**Changing button background-color.** I'm trying to change the background-color of a button. I looked at the style guide but have not found where this style is defined (and the style guide for ZK 3.5 is not available). In ZK 3.0.5 it is also not specified where the background-color is defined; see for example the following: http://www.zkoss.org/doc/styleguide/ch01s06.html. Here the styles for fonts are defined, but nothing else. The following is my example code:

<button label="My button" style="color:white; background-color:red"/>

With this code, the style attribute "color" works correctly (even though it is not specified in the style guide), but when specifying the "background-color" attribute, nothing changes. Any solutions?

**Another problem with listbox.** I want to change the listbox header font-size, for example to 8pt. I have looked at the style guide (see http://www.zkoss.org/doc/styleguide/ch01s21.html), and I tried with the following code:

<style>
.z-listheader {
    font-size: 8pt;
}
.z-listcell {
    font-size: 8pt;
}
</style>
<listbox width="30%">
    <listitem>
        <listcell label="Mickey"/>
        <listcell label="Mouse"/>
    </listitem>
</listbox>

I added the prefix "z-" because I found this suggestion in the forum: http://www.zkoss.org/forum/index.zul#path%3DlistComment%3BdiscussionId%3D5647%3BcategoryId%3D14%3B

But again nothing changes. Any suggestions?

Emanuele

## 3 Replies

flyworld

ZK 3.5 changed almost all CSS classes, and the documentation is out of date.
You can change your button like this:

<style>
.z-button .z-button-tr, .z-button .z-button-tm, .z-button .z-button-tl,
.z-button .z-button-cr, .z-button .z-button-cm, .z-button .z-button-cl,
.z-button .z-button-br, .z-button .z-button-bm, .z-button .z-button-bl {
    color: white;
    background-image: none;
    background-color: red;
}
</style>

<style>
.z-listheader {
    font-size: 8pt;
}
</style>

Emanuele

It works! Thank you! Very helpful! But... if I don't want to specify the new style inside the style tag, is there a way to set it directly on the component, using setStyle?

flyworld

Unfortunately, some components have complex HTML structures... if you use setStyle, it will apply the style to the outer HTML tag. For example:

<div style="style you set">
    <div>
        <span> something you want to change</span>
    </div>
</div>

Maybe you can use setSclass to set a CSS class on the component instead.
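Following the setSclass suggestion, here is a minimal sketch of the class-based approach in ZUL markup. The class name my-small-header is made up for illustration, and the exact markup may need adjusting for your ZK version:

```xml
<zk>
    <style>
        .my-small-header {
            font-size: 8pt;
        }
    </style>
    <listbox width="30%">
        <listhead>
            <!-- sclass attaches the custom CSS class;
                 equivalent to calling setSclass("my-small-header") in Java -->
            <listheader label="Name" sclass="my-small-header"/>
        </listhead>
        <listitem>
            <listcell label="Mickey"/>
            <listcell label="Mouse"/>
        </listitem>
    </listbox>
</zk>
```

Compared with setStyle, which only decorates the outermost tag, a class can be targeted at the inner elements via CSS selectors.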
2019-02-16 08:17:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2116616666316986, "perplexity": 11431.34645842056}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247479967.16/warc/CC-MAIN-20190216065107-20190216091107-00225.warc.gz"}
https://physics.stackexchange.com/questions/171250/rigorous-computation-of-four-velocity
# Rigorous computation of four-velocity

In Special Relativity we can consider spacetime to be the Minkowski space $\mathbb{R}^{1,3}$, which is just $\mathbb{R}^4$ together with the non-degenerate symmetric bilinear form $g$ that in some basis has the matrix $(g_{\mu\nu})=\operatorname{diag}(1,-1,-1,-1)$. In particular we can consider the path of a particle through this spacetime as simply a curve $\alpha : I\subset \mathbb{R}\to \mathbb{R}^{1,3}$. Such a curve can be written as $$\alpha(\tau) = (\alpha^0(\tau),\alpha^1(\tau),\alpha^2(\tau),\alpha^3(\tau))$$ with $\tau$ being the proper time (i.e. the time measured in the reference frame of the particle itself). In the classical language, one would refer to $\alpha$ as a position four-vector $\mathbf{R}=(ct,x,y,z)$. Following that classical line of thought we could simply, in a non-rigorous way, consider infinitesimal changes in the coordinates $dt,dx,dy,dz$, which induce a $d\mathbf{R}$ on the position. Dividing by $dt$ we would get $$\dfrac{d\mathbf{R}}{dt}=\left(c,\dfrac{dx}{dt},\dfrac{dy}{dt},\dfrac{dz}{dt}\right).$$ If one then wanted to know how $\mathbf{R}$ changes as $\tau$ changes, we would compute $$\dfrac{d\mathbf{R}}{d\tau} = \dfrac{d\mathbf{R}}{dt}\dfrac{dt}{d\tau} = \gamma (c,\mathbf{v})$$ using time dilation $dt = \gamma d\tau$ and $\mathbf{v}$ as the usual velocity. Although this result is true, I would like to know how to derive it rigorously. The problem in the rigorous version is that for $i=1,2,3$ the derivative of $\alpha^i$ with respect to $t$ is not even defined, because $\alpha^i$ is not a function defined on $\mathbb{R}^{1,3}$, where $t$ is a coordinate we can differentiate with respect to; it is defined on $I$, where we can only differentiate with respect to proper time. Indeed, in the rigorous version this $\mathbf{v}$ doesn't seem to make sense, because it consists of the derivatives of the $\alpha^i$ with respect to $t$.
So in the rigorous version, what is really this $\mathbf{v}$ and how does one show that $\alpha'(\tau) = \gamma(c,\mathbf{v})$?

• This is essentially what I posted in: physics.stackexchange.com/a/171121/75518 Note that I used an arbitrary parametrisation of the path and only later introduced proper time etc. – image Mar 19 '15 at 19:06
• Also: One does not in a non-rigorous way consider infinitesimal elements. Physicists tend to use that vague language in lectures without clear definitions. There's an exact science about such stuff, called differential geometry. – image Mar 19 '15 at 19:25

Mathematically, since the coordinates $\alpha$ are functions of $\tau$, you can, under certain assumptions on the curve, re-express the curve as a function from the time coordinate to the spacetime coordinates by writing $\alpha^\mu(t)=\alpha^\mu((\alpha^0)^{-1}(t))$, which is just a reparametrization of the curve from $\tau$ to $t$. The assumptions are in essence that the function $\alpha^0(\tau)$ be invertible; see the last comment. They will always be satisfied if you don't consider limit cases. Now to prove that $\alpha'(\tau)=\gamma(c,\vec{v})$, you decompose the total derivative with respect to $\tau$ by expressing it via a function of $t$ as you have done, and it is rigorous since we have defined the functions $\alpha^\mu(t)$. Of course it wouldn't work when $\gamma \longrightarrow \infty$, but then you would be going at the speed of light.

It is incorrect to say that we can only differentiate $\alpha^i$ with respect to proper time. We know that proper time is the parameter along the curve, i.e. a point $x$ on the curve is a function of the proper time $\tau$. This relation is given by the vector $\alpha=\alpha(\tau)$. However, we also have the relation $$\tau(x)=\int^x \mathrm{d}\tau$$ which is just the arc length formula for $\mathbb{R}^{1,3}$. Thus we can write $\alpha(x)=\alpha(\tau(x))$ and take derivatives with respect to coordinates.
Note further the relation $$\eta_{\mu\nu}\frac{\mathrm{d}x^\mu}{\mathrm{d}\tau}\frac{\mathrm{d}x^\nu}{\mathrm{d}\tau}=1$$ which is to be solved for $\alpha^i$ in terms of $t$. Thus we can find the components of the curve all in terms of $t$, making the derivatives in the OP possible.
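The chain-rule step in the first answer can be written out explicitly. The reparametrized components $\tilde\alpha^i$ below are notation introduced here for illustration, not in the original posts; writing $t(\tau)=\alpha^0(\tau)/c$, invertible for a timelike curve, and $\tilde\alpha^i(t)=\alpha^i(\tau(t))$:

```latex
% Chain rule applied to \alpha^i(\tau) = \tilde\alpha^i(t(\tau)):
\alpha'(\tau)
  = \Bigl( c\,\frac{dt}{d\tau},\; \frac{d\tilde\alpha^i}{dt}\,\frac{dt}{d\tau} \Bigr)
  = \frac{dt}{d\tau}\,(c,\mathbf{v}),
\qquad
\mathbf{v} := \Bigl( \frac{d\tilde\alpha^1}{dt},
                     \frac{d\tilde\alpha^2}{dt},
                     \frac{d\tilde\alpha^3}{dt} \Bigr).
% Normalization of a proper-time parametrization, g(\alpha',\alpha') = c^2
% (the relation quoted above uses units with c = 1):
g\bigl(\alpha'(\tau),\alpha'(\tau)\bigr)
  = \Bigl(\frac{dt}{d\tau}\Bigr)^{2}\bigl(c^{2}-|\mathbf{v}|^{2}\bigr) = c^{2}
\;\Longrightarrow\;
\frac{dt}{d\tau} = \frac{1}{\sqrt{1-|\mathbf{v}|^{2}/c^{2}}} = \gamma .
```

So the factor $\gamma$ is not put in by hand via "time dilation": it drops out of the normalization of the proper-time parametrization.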
2019-08-22 04:33:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9647853970527649, "perplexity": 93.02218243399024}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027316783.70/warc/CC-MAIN-20190822042502-20190822064502-00052.warc.gz"}
https://electronics.stackexchange.com/questions/583004/oscilloscope-low-side-connection/583008
# Oscilloscope low side connection

I am taking an introductory circuits class where we are learning to measure resistances. I recently measured the resistance between the low-side of an oscilloscope and earth ground on a power supply and saw a resistance of 2.6 ohms. I am not sure what to take away from this in terms of what the low-side of the oscilloscope is internally connected to. My professor also warned us that a carelessly connected low-side (black) probe lead could blow out a powered IC chip, but I do not understand why. Any help clearing up my confusion would be appreciated.

• This is good: youtube.com/watch?v=xaELqAo4kkQ Aug 25 '21 at 3:12
• Take away from it that your probe is grounded, so don't use it on mains or other potentials with respect to ground Aug 25 '21 at 9:03

All these points, marked with red arrows, are connected together, physically, with metal and wires:

For example, the oscilloscope's "low-side" is the outer metallic shield on the female BNC connector at the front of the device, which is connected via the chassis and power cable of the oscilloscope directly to mains earth. The probe's ground clip is connected directly to its own male BNC connector's shield, which when plugged into the oscilloscope will also be connected to mains earth via the oscilloscope's chassis. The bench power supply's negative terminal may or may not be connected internally to mains earth. I'll assume it is in that picture, because I see no separate green earth terminal.

Let's say you build the following circuit, powered from the bench supply, and you wish to measure the voltage across the LED using the oscilloscope:

(schematic created using CircuitLab)

You've been told that the voltage across a green LED is about 2V, right? You simply want to see this for yourself.
This looks innocent enough, until you draw in the "hidden" earth connections:

(schematic created using CircuitLab)

If you look carefully, you'll see that you completely short-circuited the resistor. There might as well be a direct wire across it, because that's exactly what is happening if you connect your oscilloscope's "low-side" to the bottom of the LED. Effectively we are applying the full 12V of the power supply directly across the LED, which will "release the magic smoke" and die a really horrible death.

If you don't take great care, you can severely damage the oscilloscope too. Imagine the enormous current that would flow around the earth loop, via the oscilloscope, if you were to accidentally touch the probe's low-side clip directly to the power supply's positive terminal.

The "take-away" is that you really can't use a mains-powered oscilloscope to measure voltages across "any old thing", in the way you can with a battery-powered multimeter. You must always be aware that the probe's low-side is permanently connected to mains earth, and if any part of your circuit under test is also connected to mains earth, you have the "potential" for disaster (pun intended). Battery-powered equipment has no "hidden" connections to mains earth, and therefore can't cause/suffer this kind of catastrophic destruction. There are of course other great ways to destroy perfectly good and expensive equipment.

• Just to add that most (all?) of the benchtop power supplies I ever used were actually isolated from the mains ground. I'm sure there are non-isolated ones but it might be good to make clear that there are very many (probably the majority) that actually are isolated. The one in your picture, for example, is isolated. Aug 25 '21 at 3:00
• @BeB00 That's a fair point, I hope the reader will read the "may or may not" part, instead of concluding that all power supplies have earthed negative terminals. I'll italicise that part.
Aug 25 '21 at 3:09
• Many bench power supplies are DC-isolated but AC-connected to ground, usually with about a 100 nF polyester film capacitor. – jpa Aug 25 '21 at 8:48
• Agreed that lab supplies are frequently floating, or at least often have three studs to optionally allow you to ground one or the other side of the output. – J... Aug 25 '21 at 12:01
• The first scope I used had individually isolated probes, so you actually could use them this way. We all picked up some bad habits that way, which we noticed we had to unlearn after we melted the grounding wire on one of the probes of our expensive new scope. Aug 26 '21 at 13:07

Yes, your prof is giving you good advice: ensure that you never connect scope ground to anything but something else that is also ground. Many systems die an early accidental death when that rule isn't followed, for example by an errant dangling scope ground brushing against some high-current component and shorting it out. Don't be that person; be very careful.

Where does the scope ground go? To the instrument's grounded enclosure, which is connected to safety ground at the power inlet. This path is very low impedance: you measured 2.6 ohms; this is mostly from your meter's leads, not the scope ground path, which will be lower than that.

If you intend to make a differential measurement, use 2 probes in 'differential mode' (invert one, sum the channels). Multi-channel scopes support this. Each probe is still grounded on its own. You can also use a differential probe (more expensive, but better).

I am not sure why you were making that measurement, but it led you to ask a good question.

Normally the low side or common of the probe is connected to earth via the third pin in the plug of the scope power cord. Touching that to a live circuit that has one side of it grounded will force the current through the scope grounds, and if you are lucky it will blow something in the probe.
The current would probably be limited by the wiring in the probe and scope; most are not designed to handle mains power. Your scope has sensitive electronic components, and if a static discharge got to the lead it could damage some static-sensitive parts of the scope. This ground connection is common on many bench instruments but not on battery-powered portable instruments. Be careful when using benchtop instrumentation.

> I [...] measured the resistance between the low-side of an oscilloscope and earth ground on a power supply and saw a resistance of 2.6 ohms. I am not sure what to take away from this in terms of what the low-side of the oscilloscope is internally connected to.

It is connected to ground via an impedance with real part = 2.6 Ω. That would be a good starting point, and an entirely reasonable observation. I tried it on my particular scope and probe, and got Re(Z) = 1.8 Ω. The actual impedance is complex of course, and the imaginary part may be significant. So it's up to you to decide how significant that imaginary part is.

Start by assuming that the oscilloscope has to measure signals at any frequency within its bandwidth. So you have two limiting scenarios: f = 0 and f = BW. Investigate both. Then you'll hopefully also realize that the impedance can be approximated by the resistance alone up to a certain threshold frequency. You begin by choosing some threshold above which you would say the imaginary part of the impedance is significant. That threshold frequency will let you split the bandwidth into two parts: one where the behavior is as if only a resistive impedance was present, and another where the impedance is complex (can you guess whether it's more capacitive or more inductive?).

As for what physical components are responsible for "making" this impedance: it can be quite a few, and many of them may be outside of the oscilloscope, within the building wiring.
Recall that there's an inductance connected between the neutral and ground conductors at the outlet the scope is plugged to. That inductance is due to the neutral-ground bonding. Then recall that the oscilloscope has a power line input that likely places a capacitor between the neutral and ground. You could then investigate those independently, minding electrical safety of course. Don't expect the neutral-to-ground voltage to be zero, and don't plug any LCR meters between those terminals!
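To get a feel for the threshold-frequency idea in the last answer, here is a rough numerical sketch modeling the ground path as a series R-L. The 2.6 Ω real part comes from the measurement in the question; the 1 µH inductance is an assumed, purely illustrative value for the ground-path wiring, not a measured one:

```python
import math

R = 2.6   # ohms, measured real part of the ground-path impedance (from the question)
L = 1e-6  # henries, ASSUMED wiring inductance, for illustration only

def impedance_magnitude(f):
    """|Z| of a series R-L model of the scope's ground path at frequency f (Hz)."""
    X = 2 * math.pi * f * L   # inductive reactance
    return math.hypot(R, X)   # |Z| = sqrt(R^2 + X^2)

# Frequency at which the reactance equals the resistance (X = R),
# i.e. where the imaginary part can no longer be ignored:
f_threshold = R / (2 * math.pi * L)

for f in (0.0, 1e3, 1e6, f_threshold):
    print(f"{f/1e3:10.1f} kHz -> |Z| = {impedance_magnitude(f):.2f} ohm")
```

With these numbers the threshold lands around 400 kHz: below it the ground path behaves essentially like its 2.6 Ω resistance, above it the inductive term dominates.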
2022-01-17 23:40:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3697461783885956, "perplexity": 1393.8512147606089}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300624.10/warc/CC-MAIN-20220117212242-20220118002242-00457.warc.gz"}
http://math.stackexchange.com/questions/9768/volume-of-a-geodesic-ball
# Volume of a geodesic ball

This may be embarrassingly simple, but I can't see it. Let $M$ be a Riemannian manifold of dimension $n$; fix $x \in M$, and let $B(x,r)$ denote the geodesic ball in $M$ of radius $r$ centered at $x$. Let $V(r) = \operatorname{Vol}(B(x,r))$ be the Riemannian volume of $B(x,r)$. It seems to be the case that for small $r$, $V(r) \sim r^n$, i.e. $V(r)/r^n \to c$ with $0 < c < \infty$. How is this proved, and where can I find it?

Given a neighborhood $U \ni x$ and a chart $\phi : U \to \mathbb{R}^n$, certainly $\phi$ has nonvanishing Jacobian, hence (making $U$ smaller if necessary) bounded away from 0. So $\operatorname{Vol}(\phi^{-1}(B_{\mathbb{R}^n}(\phi(x), r))) \sim r^n$. But I do not see how to relate the pullback $\phi^{-1}(B_{\mathbb{R}^n}(\phi(x), r))$ of a Euclidean ball to a geodesic ball in $M$.

A Google search for the title of the question finds this article where the first five coefficients in the series expansion of the volume (in powers of $r$) are computed. The first term is the same as the Euclidean volume (proportional to $r^n$, in other words); then come higher-order corrections depending on the curvature.

• By the way, concerning your idea for a proof: instead of mapping a ball from an arbitrary chart, maybe you could map a ball from the tangent space using the exponential map, where you know more about the Jacobian. (I'm going to bed now, and haven't thought this through properly, but someone else can surely say something wise about this.) – Hans Lundmark Nov 10 '10 at 21:58
• The reference has a stronger result, which I think will be helpful as well. Thanks! – Nate Eldredge Nov 10 '10 at 22:45

It's simple, all right. As I realized not long after posting (and as Hans also suggested), the key is the exponential map. The tangent space $T_x M$ gets an inner product space structure from the Riemannian metric; we can isometrically identify it with $\mathbb{R}^n$.
Now $\exp_x : \mathbb{R}^n \to M$ is a diffeomorphism on some small ball $B_{\mathbb{R}^n}(0,\epsilon)$; on this ball, straight lines map to length-minimizing geodesics (see Do Carmo, Riemannian Geometry, Proposition 3.6), and thus Euclidean balls map to geodesic balls of the same radius. Taking $\epsilon$ smaller if necessary, we can assume the Jacobian of $\exp_x$ is bounded away from $0$ and $\infty$ on $B_{\mathbb{R}^n}(0, \epsilon)$; thus for $r < \epsilon$ we have that $\operatorname{Vol}(B(x,r))$ is comparable to $\operatorname{Vol}(B_{\mathbb{R}^n}(0,r)) \sim r^n$.
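The final step can be made quantitative with the change-of-variables formula (the notation $\omega_n$ for the Euclidean unit-ball volume is introduced here):

```latex
\operatorname{Vol}\bigl(B(x,r)\bigr)
  = \int_{B_{\mathbb{R}^n}(0,\,r)} \bigl|\det d(\exp_x)_v\bigr| \, dv ,
\qquad r < \epsilon .
% Since d(\exp_x)_0 = \mathrm{id}, the Jacobian tends to 1 as v -> 0, hence
\lim_{r \to 0} \frac{\operatorname{Vol}(B(x,r))}{r^n}
  = \omega_n := \operatorname{Vol}\bigl(B_{\mathbb{R}^n}(0,1)\bigr).
```

So the constant $c$ in $V(r)/r^n \to c$ is exactly the volume of the Euclidean unit ball, which matches the first term of the series expansion in the article cited above.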
2015-07-31 03:28:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.958504855632782, "perplexity": 182.12184040551583}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042988048.90/warc/CC-MAIN-20150728002308-00197-ip-10-236-191-2.ec2.internal.warc.gz"}
https://srikanthperinkulam.com/2015/04/17/wifi-freya/
# Enabling WiFi from the terminal

After a few rough beta installs of the OS, I decided to do a clean install of Freya. It's been a smooth sail so far but for the intermittent WiFi disconnects. For some reason the WiFi gets soft-blocked at times. Until I figure out what's causing this to happen, and for future reference, here's a nifty work-around:

• Determine the current state of the radio transmitters using rfkill. You'll be able to see immediately if any of your devices are soft blocked.

$ sudo rfkill list all

• Unblock radio transmitters as needed. In my case, something was turning the Wi-Fi off. I just had to turn this on using the below command:

$ rfkill unblock wifi

Note: In a few laptops the Wi-Fi key is also mapped to the Bluetooth function, so you may have to tap it a few times to enable or disable Wi-Fi and/or Bluetooth.
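If you want to script the check rather than eyeball it, here is a small sketch that parses `rfkill list all`-style output. The sample text mirrors rfkill's usual layout; on a real system you would feed it `subprocess.check_output(['rfkill', 'list', 'all'], text=True)` instead:

```python
def soft_blocked_devices(rfkill_output):
    """Return names of devices reported as 'Soft blocked: yes' by `rfkill list`."""
    blocked = []
    current = None
    for line in rfkill_output.splitlines():
        if not line.startswith((" ", "\t")):
            # Device header line, e.g. "0: phy0: Wireless LAN"
            parts = line.split(": ")
            current = parts[1] if len(parts) >= 2 else line
        elif "Soft blocked: yes" in line and current:
            blocked.append(current)
    return blocked

sample = """0: phy0: Wireless LAN
\tSoft blocked: yes
\tHard blocked: no
1: hci0: Bluetooth
\tSoft blocked: no
\tHard blocked: no"""

print(soft_blocked_devices(sample))  # ['phy0']
```

Any device names this prints are candidates for `rfkill unblock`.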
2017-06-28 14:20:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3442409634590149, "perplexity": 2506.491072539582}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128323682.21/warc/CC-MAIN-20170628134734-20170628154734-00169.warc.gz"}
https://mathematica.stackexchange.com/questions/66451/how-to-get-this-grid-normal-height
# How to get this grid to normal height?

When I execute this

Grid[{{x, x}, {SpanFromAbove, x}, {SpanFromAbove, x}}, ItemSize -> {{Automatic}, {Automatic, {1 -> Scaled[0.2], 2 -> Scaled[0.4], 3 -> Scaled[0.4]}}}, Frame -> All, Alignment -> {Center, Center}, Spacings -> {{2}, {1}}]

I get an extremely tall grid instead of a normal-height grid. Is this a bug or am I doing something wrong? Version 10.0.1, Win8.1 64-bit.

# Update

This just keeps getting worse. I have used a Pane, as some have suggested that the grid needs a sized container for Scaled to work. I get back a truncated grid. This is really baffling. Any ideas how to get this to work?

Pane[ Grid[{{x, x}, {SpanFromAbove, x}, {SpanFromAbove, x}}, ItemSize -> {{{Scaled[0.5]}}, {Automatic, {1 -> Scaled[0.2], 2 -> Scaled[0.4], 3 -> Scaled[0.4]}}}, Frame -> All, Alignment -> {Center, Center}, Spacings -> {{1}, {1}}], ImageSize -> {432, 216}]

It gives a cut-off grid: I can just make out the top of the "x" in the first column. It shouldn't be this difficult to give a grid a prescribed size and scale some rows within it. I must be doing something wrong.

• Unless you wrap Grid with something with a set size, Scaled will refer to WindowSize. – Kuba Nov 24 '14 at 12:40
• So I need to wrap it in a Pane or Panel and that will work? – Edmund Nov 24 '14 at 13:47
• @Edmund The question is, what do you need? – Kuba Nov 24 '14 at 13:47
• @Silvia Edmund set Item height, not width. specx here is {Automatic} and it is the width of items. But for specy he set explicitly 1->Scaled[.4] etc., which means "first row height is 40% of the enclosing region". – Kuba Nov 24 '14 at 13:51
• @Silvia But I agree, the whole layout management is broken. For example here, Scaled takes something different into account: Framed["x", ImageSize -> {Scaled@1, Scaled@1}] – Kuba Nov 24 '14 at 13:53
Like I've said, the whole layout management is broken and everyone who tried do something more complex than simple grid with automatic options will agree. This is just another example, your case works if vertical heights sum up to: .5... Framed[ Grid[{ {x, x}, {SpanFromAbove, x}, {SpanFromAbove, x}}, ItemSize -> { {{Scaled@.499}}, (*specX*) {Scaled@.1, Scaled@.2, Scaled@.2} (*specY*) }, Frame -> All, Alignment -> {Center, Center}, Spacings -> {0, 0}] , ImageSize -> {432, 216}, FrameMargins -> 0, Alignment -> {Left, Top}]
https://www.gradesaver.com/textbooks/math/calculus/calculus-3rd-edition/chapter-11-infinite-series-11-3-convergence-of-series-with-positive-terms-exercises-page-556/30
## Calculus (3rd Edition)

Given $$\sum_{n=1}^{\infty}\frac{n !}{n^{3}}$$ For $n\geq 1$ \begin{align*} \frac{n !}{n^{3}}&=\frac{n \times(n-1) !}{n^{3}}\\ &=\frac{(n-1) !}{n^{2}} \end{align*} Since $(n-1)!\geq n$ for $n\geq 4$, we have $\frac{n!}{n^{3}}\geq \frac{1}{n}$ for $n\geq 4$. Compare with $\displaystyle\sum_{n=1}^{\infty}\frac{1}{n}$, which is a divergent series (the harmonic series, a $p$-series with $p=1$). By the comparison test, $\displaystyle\sum_{n=1}^{\infty} \frac{n !}{n^{3}}$ diverges. (Indeed, $\frac{n!}{n^{3}}\to\infty$, so the terms do not tend to $0$ and the series diverges by the divergence test.)
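As a quick numeric sanity check (a throwaway awk sketch, not part of the textbook solution), tabulating the first few terms $n!/n^{3}$ shows that after an initial dip they grow without bound:

```shell
# Tabulate a_n = n!/n^3 for n = 1..10; the terms eventually grow,
# so the necessary condition a_n -> 0 for convergence fails.
awk 'BEGIN {
  f = 1
  for (n = 1; n <= 10; n++) {
    f *= n                                   # f = n!
    printf "n=%-2d  n!/n^3 = %.3f\n", n, f / (n^3)
  }
}'
```

By $n=10$ the term is already in the thousands.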
https://abaqus-docs.mit.edu/2017/English/SIMACAEELMRefMap/simaelm-c-connfrictionbehav.htm
# Connector friction behavior

Frictional effects can be defined in any connector with available components of relative motion. A typical connector might have several pieces that are in relative motion and in frictional contact; therefore, both frictional forces and frictional moments may develop in the connector's available components of relative motion.

To define connector friction in Abaqus, you must specify the following:

- the friction law, as governed by a friction coefficient;
- the contributions to the friction-generating connector contact forces or moments; and
- the local tangent direction in which the friction forces/moments act.

The friction coefficient can be:

- expressed in a general form in terms of slip rate, contact force, temperature, and field variables;
- defined by a static and a kinetic term with a smooth transition zone defined by an exponential curve; and
- limited by a maximum tangential force, $F_{max}$, which is the maximum value of tangential force that can be carried by the connector before sliding occurs.

Abaqus provides two alternatives for specifying the other aspects of friction interactions in connectors:

- Predefined friction interactions, for which you specify a set of parameters that are characteristic of the connection type for which friction is modeled. Abaqus automatically defines the contact force contributions and the local tangent directions in which friction occurs. Predefined friction interactions represent common cases and are available for many connection types (see Connection types). If desired, known internal contact forces (such as from a press-fit assembly) can be defined as well.
- User-defined friction interactions, for which you define all friction-generating contact force contributions and the local tangent directions along which friction occurs.
The user-defined friction interactions can be used if predefined friction is not available for the connection type of interest or if the predefined friction interaction does not adequately describe the mechanism being analyzed. Although more complicated to use, user-defined interactions:

- are very general in nature, owing to the flexibility of defining arbitrary sliding directions via connector potentials and contact forces via connector derived components;
- allow the specification of sliding directions, contact forces, and additional internal contact forces as functions of connector relative position or motion, temperature, and field variables (the internal contact forces can also depend on accumulated slip); and
- allow several friction definitions to be used in the same connection, applied in different components of relative motion.

Related topics: About connectors; Connector behavior; Connector functions for coupled behavior. Keywords: *CHANGE FRICTION, *CONNECTOR BEHAVIOR, *CONNECTOR DERIVED COMPONENT, *CONNECTOR FRICTION, *CONNECTOR POTENTIAL, *FRICTION; Defining friction.

Products: Abaqus/Standard, Abaqus/Explicit, Abaqus/CAE
https://urwisebu.tk/binary-call-option-877721.html
July 14, 2020

60 Second Binary Options Strategy: the complete guide: A binary option (also known as an all-or-nothing option) is a financial contract that entitles its holder to a fixed payoff when the event triggering the payoff occurs, and a zero payoff otherwise…

Binary call option example: A call option that expires in one month has a strike price of $31. The cost of this option, called the premium, is $0.35. Each option contract controls 100 shares, so buying one option…

Vanilla Option Definition: The binary options trader decides the amount of money he wants to bet and invests that amount when he buys the binary option. If the price is $0.25, then he stands to make $0.75 if the underlying moves as much as the investor hopes…

Binary Options Greeks | Binary Trading: "Call/Put", or High/Low, binary options were the first type of binary option to be introduced, and this type is certainly the most used by professional traders in the world. Only at a later date were other types of options offered for binary trading.

Binary option - Wikipedia: Binary options traders can profit from an anticipated move to the upside with binary call options. Today's binary options trading strategy suggests call options to be placed on dips below 1…

What Are Binary Options? - YouTube: How to trade binary options. Binary options trading has become increasingly popular over the last decade. Day traders in particular access these markets with ease from their computers. Another draw is that entrance requires relatively…

The Collar Strategy in Binary Options: Binary option pricing. The payoff of a binary option differs from that of a regular option: a binary option either has a positive payoff or none. In the case of a binary call, if the price at a certain date, $S_T$, is larger than or equal to a strike price $K$, it will generate a payoff $Q$. Notice that it does not matter whether the future stock price just equals the strike, is somewhat larger, or a…

What is possible profit: With binary options it does not matter how far the asset moves, only that you predicted the direction properly. Call options and put options are the most common type of binary option trade; other commonly known terms for them are Up/Down options or High/Low options.

3 Ways to Understand Binary Options - wikiHow / Double One Touch Binary Options: Choosing the Call option means that you are predicting that the asset's price will go up before the expiration time comes. Here's an example of how trading with a Call option works: a trader selects the USD/JPY currency pair, which currently trades at 99.15…

EUR/USD binary signal, Live stream - Binary-Signal.com: Fig. 3 – fair value w.r.t. implied volatility. The 1% delta in Figure 4 reflects this dramatic change of the binary call price, with the delta profile showing zero delta, followed by a sharply increasing delta as the binary call price changes dramatically over a small change in the underlying, followed by a sharply decreasing delta as the delta reverts to…

Binary options trading - iqbroker.co: Binary options Pro signals service. Price: when you receive the signal, after you wait, which price level and time are set for trading? 60 …

Black–Scholes model - Wikipedia: A typical binary option allows you to trade in relation to the current market price. For instance, if EUR/USD is trading at 1.1500, you can only purchase a call (or put) with a strike price of 1.1500. This means your option will be in the money if the exchange rate is…

Trade Options Online: CFD - Real Time Quotes and Charts: Touch & No Touch options. When you start trading binary options, you'll note you have access to several types of instruments. The most common and simplest among them are call and put trades.

Options Calculator - Drexel University: These are the most popular binary option trading terms. Both are related to the underlying asset's price movement: a put option predicts a price decline of the underlying asset, and a call option predicts a price increase.

Put and call option binary: Gamma, represented by the Greek letter 'γ', plays an important part in the change of Delta when a binary call/put option nears the target price. The Gamma rises sharply when a binary option nears or crosses the target. In short, Gamma acts as an indicator for…

Touch / No Touch Binary Options - OneTouch Binary Trading: The truth about 60 second binary options. 60 second binary options are a relatively new and dynamic trading innovation which fills a need for binary options traders who are looking to profit on quick moves in the market. 60 second binary options allow a trader to execute call or put binary options which expire in just one minute.

Binary Call Option Price - dttodvo.com: What are binary options? A binary option is a type of option with a fixed payout in which you predict the outcome from two possible results. If your prediction is correct, you receive the agreed payout. If not, you lose your initial stake, and nothing more. It's called 'binary' because there can be…
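For reference, the fixed-payoff structure described above has a closed form under the standard Black–Scholes assumptions. This is the textbook cash-or-nothing call formula, not anything specific to a broker mentioned here: a binary call paying $Q$ if $S_T \ge K$ (and nothing otherwise) is worth

$$C = Q\,e^{-rT}\,N(d_2), \qquad d_2 = \frac{\ln(S_0/K) + \left(r - \tfrac{1}{2}\sigma^2\right)T}{\sigma\sqrt{T}},$$

where $N$ is the standard normal CDF, $S_0$ the spot price, $K$ the strike, $r$ the risk-free rate, $\sigma$ the volatility, and $T$ the time to expiry.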
https://www.gradesaver.com/textbooks/math/algebra/algebra-1-common-core-15th-edition/chapter-1-foundations-for-algebra-1-4-properties-of-real-numbers-practice-and-problem-solving-exercises-page-28/68
## Algebra 1: Common Core (15th Edition) $1/18$ We first multiply 5/9 by 10/10 and 5/10 by 9/9 (this is legal because 9/9 and 10/10 equal 1) to create a common denominator, which allows us to subtract the two expressions. Thus, we obtain: $\frac{50}{90}$-$\frac{45}{90}$=$\frac{5}{90}$=$\frac{1}{18}$
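A quick numeric cross-check of the arithmetic (a throwaway awk one-liner, not part of the textbook solution):

```shell
# 5/9 - 5/10 should equal 1/18; print both to six decimal places.
awk 'BEGIN { printf "%.6f %.6f\n", 5/9 - 5/10, 1/18 }'
```

Both values print as 0.055556, confirming the answer.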
https://wikimili.com/en/Net_(mathematics)
# Net (mathematics)

In mathematics, more specifically in general topology and related branches, a net or Moore–Smith sequence is a generalization of the notion of a sequence. In essence, a sequence is a function whose domain is the natural numbers; the codomain of this function is usually some topological space.

The motivation for generalizing the notion of a sequence is that, in the context of topology, sequences do not fully encode all information about functions between topological spaces. In particular, the following two conditions are, in general, not equivalent for a map f between topological spaces X and Y:

1. The map f is continuous in the topological sense;
2. Given any point x in X, and any sequence in X converging to x, the composition of f with this sequence converges to f(x) (f is continuous in the sequential sense).

While condition 1 necessarily implies condition 2, the reverse implication does not necessarily hold if the topological spaces are not both first-countable. In particular, the two conditions are equivalent for metric spaces.

The concept of a net, first introduced by E. H. Moore and Herman L. Smith in 1922, [1] generalizes the notion of a sequence so that the above conditions (with "sequence" replaced by "net" in condition 2) are in fact equivalent for all maps of topological spaces. In particular, rather than being defined on a countable linearly ordered set, a net is defined on an arbitrary directed set. This allows theorems similar to the assertion that conditions 1 and 2 above are equivalent to hold in the context of topological spaces that do not necessarily have a countable or linearly ordered neighbourhood basis around a point. Therefore, while sequences do not encode sufficient information about functions between topological spaces, nets do, because collections of open sets in topological spaces behave much like directed sets. The term "net" was coined by John L. Kelley.
[2] [3] Nets are one of the many tools used in topology to generalize certain concepts that may only be general enough in the context of metric spaces. A related notion, that of the filter, was developed in 1937 by Henri Cartan. ## Definitions Any function whose domain is a directed set is called a net where if this function takes values in some set ${\displaystyle X}$ then it may also be referred to as a net in ${\displaystyle X}$. Elements of a net's domain are called its indices. Explicitly, a net in ${\displaystyle X}$ is a function of the form ${\displaystyle f:A\to X}$ where ${\displaystyle A}$ is some directed set. A directed set is a non-empty set ${\displaystyle A}$ together with a preorder, typically automatically assumed to be denoted by ${\displaystyle \,\leq \,}$ (unless indicated otherwise), with the property that it is also (upward) directed, which means that for any ${\displaystyle a,b\in A,}$ there exists some ${\displaystyle c\in A}$ such that ${\displaystyle a\leq c}$ and ${\displaystyle b\leq c.}$ In words, this property means that given any two elements (of ${\displaystyle A}$), there is always some element that is "above" both of them (i.e. that is greater than or equal to each of them); in this way, directed sets generalize the notion of "a direction" in a mathematically rigorous way. The natural numbers ${\displaystyle \mathbb {N} }$ together with the usual integer comparison ${\displaystyle \,\leq \,}$ preorder form the archetypical example of a directed set. Indeed, a net whose domain is the natural numbers is a sequence because by definition, a sequence in ${\displaystyle X}$ is just a function from ${\displaystyle \mathbb {N} =\{1,2,\ldots \}}$ into ${\displaystyle X.}$ It is in this way that nets are generalizations of sequences. Importantly though, unlike the natural numbers, directed sets are not required to be total orders or even partial orders. 
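A concrete non-sequential example may help (this is a standard example, though not one mentioned above): the finite subsets of $\mathbb{N}$, ordered by inclusion, form a directed set, since any two finite sets $F$ and $G$ are both contained in $F \cup G$. Given a family of real numbers $(a_n)_{n \in \mathbb{N}}$, the assignment

$$x_F := \sum_{n \in F} a_n, \qquad F \subseteq \mathbb{N} \text{ finite},$$

is then a net of partial sums indexed by this directed set. Its convergence, in the sense defined below, is exactly unconditional summability of the family $(a_n)$, which illustrates how nets capture limiting processes that are not indexed by $\mathbb{N}$ in any natural order.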
Moreover, directed sets are allowed to have greatest elements and/or maximal elements, which is the reason why when using nets, caution is advised when using the induced strict preorder ${\displaystyle \,<\,}$ instead of the original (non-strict) preorder ${\displaystyle \,\leq }$; in particular, if a directed set ${\displaystyle (A,\leq )}$ has a greatest element ${\displaystyle a\in A}$ then there does not exist any ${\displaystyle b\in A}$ such that ${\displaystyle a<b}$ (in contrast, there always exists some ${\displaystyle b\in A}$ such that ${\displaystyle a\leq b}$). Nets are frequently denoted using notation that is similar to (and inspired by) that used with sequences. A net in ${\displaystyle X}$ may be denoted by ${\displaystyle \left(x_{a}\right)_{a\in A},}$ where unless there is reason to think otherwise, it should automatically be assumed that the set ${\displaystyle A}$ is directed and that its associated preorder is denoted by ${\displaystyle \,\leq .}$ However, notation for nets varies with some authors using, for instance, angled brackets ${\displaystyle \left\langle x_{a}\right\rangle _{a\in A}}$ instead of parentheses. A net in ${\displaystyle X}$ may also be written as ${\displaystyle x_{\bullet }=\left(x_{a}\right)_{a\in A},}$ which expresses the fact that this net ${\displaystyle x_{\bullet }}$ is a function ${\displaystyle x_{\bullet }:A\to X}$ whose value at an element ${\displaystyle a}$ in its domain is denoted by ${\displaystyle x_{a}}$ instead of the usual parentheses notation ${\displaystyle x_{\bullet }(a)}$ that is typically used with functions (this subscript notation being taken from sequences). As in the field of algebraic topology, the filled disk or "bullet" denotes the location where arguments to the net (i.e. elements ${\displaystyle a\in A}$ of the net's domain) are placed; it helps emphasize that the net is a function and also reduces the number of indices and other symbols that must be written when referring to it later.
Nets are primarily used in the fields of Analysis and Topology, where they are used to characterize many important topological properties that (in general), sequences are unable to characterize (this shortcoming of sequences motivated the study of sequential spaces and Fréchet–Urysohn spaces). Nets are intimately related to filters, which are also often used in topology. Every net may be associated with a filter and every filter may be associated with a net, where the properties of these associated objects are closely tied together (see the article about Filters in topology for more details). Nets directly generalize sequences and they may often be used very similarly to sequences. Consequently, the learning curve for using nets is typically much less steep than that for filters, which is why many mathematicians, especially analysts, prefer them over filters. However, filters, and especially ultrafilters, have some important technical advantages over nets that ultimately result in nets being encountered much less often than filters outside of the fields of Analysis and Topology. A subnet is not merely the restriction of a net ${\displaystyle f}$ to a directed subset of ${\displaystyle A;}$ see the linked page for a definition. ## Examples of nets Every non-empty totally ordered set is directed. Therefore, every function on such a set is a net. In particular, the natural numbers with the usual order form such a set, and a sequence is a function on the natural numbers, so every sequence is a net. Another important example is as follows. 
Given a point ${\displaystyle x}$ in a topological space, let ${\displaystyle N_{x}}$ denote the set of all neighbourhoods containing ${\displaystyle x.}$ Then ${\displaystyle N_{x}}$ is a directed set, where the direction is given by reverse inclusion, so that ${\displaystyle S\geq T}$ if and only if ${\displaystyle S}$ is contained in ${\displaystyle T.}$ For ${\displaystyle S\in N_{x},}$ let ${\displaystyle x_{S}}$ be a point in ${\displaystyle S.}$ Then ${\displaystyle \left(x_{S}\right)}$ is a net. As ${\displaystyle S}$ increases with respect to ${\displaystyle \,\geq ,}$ the points ${\displaystyle x_{S}}$ in the net are constrained to lie in decreasing neighbourhoods of ${\displaystyle x,}$ so intuitively speaking, we are led to the idea that ${\displaystyle x_{S}}$ must tend towards ${\displaystyle x}$ in some sense. We can make this limiting concept precise.

## Limits of nets

If ${\displaystyle x_{\bullet }=\left(x_{a}\right)_{a\in A}}$ is a net from a directed set ${\displaystyle A}$ into ${\displaystyle X,}$ and if ${\displaystyle S}$ is a subset of ${\displaystyle X,}$ then ${\displaystyle x_{\bullet }}$ is said to be eventually in ${\displaystyle S}$ (or residually in ${\displaystyle S}$) if there exists some ${\displaystyle a\in A}$ such that for every ${\displaystyle b\in A}$ with ${\displaystyle b\geq a,}$ the point ${\displaystyle x_{b}\in S.}$ A point ${\displaystyle x\in X}$ is called a limit point or limit of the net ${\displaystyle x_{\bullet }}$ in ${\displaystyle X}$ if (and only if) for every open neighborhood ${\displaystyle U}$ of ${\displaystyle x,}$ the net ${\displaystyle x_{\bullet }}$ is eventually in ${\displaystyle U,}$ in which case this net is then also said to converge to/towards ${\displaystyle x}$ and to have ${\displaystyle x}$ as a limit.
If the net ${\displaystyle x_{\bullet }}$ converges in ${\displaystyle X}$ to a point ${\displaystyle x\in X}$ then this fact may be expressed by writing any of the following: {\displaystyle {\begin{alignedat}{4}&x_{\bullet }&&\to \;&&x&&\;\;{\text{ in }}X\\&x_{a}&&\to \;&&x&&\;\;{\text{ in }}X\\\lim _{}\;&x_{\bullet }&&\to \;&&x&&\;\;{\text{ in }}X\\\lim _{a\in A}\;&x_{a}&&\to \;&&x&&\;\;{\text{ in }}X\\\lim _{}{}_{a}\;&x_{a}&&\to \;&&x&&\;\;{\text{ in }}X\\\end{alignedat}}} where if the topological space ${\displaystyle X}$ is clear from context then the words "in ${\displaystyle X}$" may be omitted. If ${\displaystyle \lim _{}x_{\bullet }\to x}$ in ${\displaystyle X}$ and if this limit in ${\displaystyle X}$ is unique (uniqueness in ${\displaystyle X}$ means that if ${\displaystyle y\in X}$ is such that ${\displaystyle \lim _{}x_{\bullet }\to y,}$ then necessarily ${\displaystyle x=y}$) then this fact may be indicated by writing ${\displaystyle \lim _{}x_{\bullet }=x}$       or       ${\displaystyle \lim _{}x_{a}=x}$       or       ${\displaystyle \lim _{a\in A}x_{a}=x}$ where an equals sign is used in place of the arrow ${\displaystyle \to .}$ [4] In a Hausdorff space, every net has at most one limit so the limit of a convergent net in a Hausdorff space is always unique. [4] Some authors instead use the notation "${\displaystyle \lim _{}x_{\bullet }=x}$" to mean ${\displaystyle \lim _{}x_{\bullet }\to x}$ without also requiring that the limit be unique; however, if this notation is defined in this way then the equals sign ${\displaystyle =}$ is no longer guaranteed to denote a transitive relationship and so no longer denotes equality. 
Specifically, without the uniqueness requirement, if ${\displaystyle x,y\in X}$ are distinct and if each is also a limit of ${\displaystyle x_{\bullet }}$ in ${\displaystyle X}$ then ${\displaystyle \lim _{}x_{\bullet }=x}$ and ${\displaystyle \lim _{}x_{\bullet }=y}$ could be written (using the equals sign ${\displaystyle =}$) despite it not being true that ${\displaystyle x=y.}$ Intuitively, convergence of this net means that the values ${\displaystyle x_{a}}$ come and stay as close as we want to ${\displaystyle x}$ for large enough ${\displaystyle a.}$ The example net given above on the neighborhood system of a point ${\displaystyle x}$ does indeed converge to ${\displaystyle x}$ according to this definition. Given a subbase ${\displaystyle {\mathcal {B}}}$ for the topology on ${\displaystyle X}$ (where note that every base for a topology is also a subbase) and given a point ${\displaystyle x\in X,}$ a net ${\displaystyle x_{\bullet }}$ in ${\displaystyle X}$ converges to ${\displaystyle x}$ if and only if it is eventually in every neighborhood ${\displaystyle U\in {\mathcal {B}}}$ of ${\displaystyle x.}$ This characterization extends to neighborhood subbases (and so also neighborhood bases) of the given point ${\displaystyle x.}$ If the set ${\displaystyle S:=\{x\}\cup \left\{x_{a}:a\in A\right\}}$ is endowed with the subspace topology induced on it by ${\displaystyle X,}$ then ${\displaystyle \lim _{}x_{\bullet }\to x}$ in ${\displaystyle X}$ if and only if ${\displaystyle \lim _{}x_{\bullet }\to x}$ in ${\displaystyle S.}$ In this way, the question of whether or not the net ${\displaystyle x_{\bullet }}$ converges to the given point ${\displaystyle x}$ depends solely on this topological subspace ${\displaystyle S}$ consisting of ${\displaystyle x}$ and the image of (i.e. the points of) the net ${\displaystyle x_{\bullet }.}$

### Limits in a Cartesian product

A net in the product space has a limit if and only if each projection has a limit.
Symbolically, suppose that the Cartesian product ${\displaystyle X:=\prod _{i\in I}X_{i}}$ of the spaces ${\displaystyle \left(X_{i}\right)_{i\in I}}$ is endowed with the product topology and that for every index ${\displaystyle i\in I,}$ the canonical projection to ${\displaystyle X_{i}}$ is denoted by ${\displaystyle \pi _{i}:X=\prod _{j\in I}X_{j}\to X_{i}}$ and defined by ${\displaystyle \left(x_{j}\right)_{j\in I}\mapsto x_{i}.}$

Let ${\displaystyle f_{\bullet }=\left(f_{a}\right)_{a\in A}}$ be a net in ${\displaystyle X=\prod _{i\in I}X_{i}}$ directed by ${\displaystyle A}$ and for every index ${\displaystyle i\in I,}$ let ${\displaystyle \pi _{i}\left(f_{\bullet }\right)~:=~\left(\pi _{i}\left(f_{a}\right)\right)_{a\in A}}$ denote the result of "plugging ${\displaystyle f_{\bullet }}$ into ${\displaystyle \pi _{i}}$", which results in the net ${\displaystyle \pi _{i}\left(f_{\bullet }\right):A\to X_{i}.}$ It is sometimes useful to think of this definition in terms of function composition: the net ${\displaystyle \pi _{i}\left(f_{\bullet }\right)}$ is equal to the composition of the net ${\displaystyle f_{\bullet }:A\to X}$ with the projection ${\displaystyle \pi _{i}:X\to X_{i}}$; that is, ${\displaystyle \pi _{i}\left(f_{\bullet }\right):=\pi _{i}\,\circ \,f_{\bullet }.}$

If given ${\displaystyle L=\left(L_{i}\right)_{i\in I}\in X,}$ then ${\displaystyle f_{\bullet }\to L}$ in ${\displaystyle X=\prod _{i}X_{i}}$ if and only if for every ${\displaystyle \;i\in I,}$ ${\displaystyle \;\pi _{i}\left(f_{\bullet }\right):=\left(\pi _{i}\left(f_{a}\right)\right)_{a\in A}\;\to \;\pi _{i}(L)=L_{i}\;}$ in ${\displaystyle \;X_{i}.}$

#### Tychonoff's theorem and relation to the axiom of choice

If no ${\displaystyle L\in X}$ is given but for every ${\displaystyle i\in I,}$ there exists some ${\displaystyle L_{i}\in X_{i}}$ such that ${\displaystyle \pi _{i}\left(f_{\bullet }\right)\to L_{i}}$ in ${\displaystyle X_{i},}$ then the tuple defined by ${\displaystyle L:=\left(L_{i}\right)_{i\in I}}$ will be a limit of ${\displaystyle f_{\bullet }}$ in ${\displaystyle X.}$ However, the axiom of choice might need to be assumed in order to conclude that this tuple ${\displaystyle L}$ exists; the axiom of choice is not needed in some situations, such as when ${\displaystyle I}$ is finite or when every ${\displaystyle L_{i}\in X_{i}}$ is the unique limit of the net ${\displaystyle \pi _{i}\left(f_{\bullet }\right)}$ (because then there is nothing to choose between), which happens, for example, when every ${\displaystyle X_{i}}$ is a Hausdorff space. If ${\displaystyle I}$ is infinite and ${\displaystyle X=\prod _{j\in I}X_{j}}$ is not empty, then the axiom of choice would (in general) still be needed to conclude that the projections ${\displaystyle \pi _{i}:X\to X_{i}}$ are surjective maps.

The axiom of choice is equivalent to Tychonoff's theorem, which states that the product of any collection of compact topological spaces is compact. But if every compact space is also Hausdorff, then the so-called "Tychonoff's theorem for compact Hausdorff spaces" can be used instead, which is equivalent to the ultrafilter lemma and so strictly weaker than the axiom of choice. Nets can be used to give short proofs of both versions of Tychonoff's theorem by using the characterization of net convergence given above together with the fact that a space is compact if and only if every net has a convergent subnet.
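The definitions of the preceding sections (eventual membership, convergence through neighborhoods, and the failure of unique limits in non-Hausdorff spaces) can be checked mechanically on a tiny finite example. The following Python sketch is illustrative only; the three-point space, its topology, and the chosen net are invented for the demonstration:

```python
# Illustrative sketch: net convergence in a small finite topological space.
# The space, topology, and net below are invented for this demonstration.
X = {0, 1, 2}
opens = [frozenset(), frozenset({0}), frozenset({0, 1}), frozenset(X)]

def neighborhoods(x):
    """Open sets containing x (enough to test convergence)."""
    return [U for U in opens if x in U]

def eventually_in(net, index_set, geq, S):
    """True if there is an index a such that net(b) is in S for every b >= a."""
    return any(all(net(b) in S for b in index_set if geq(b, a))
               for a in index_set)

def converges_to(net, index_set, geq, x):
    """A net converges to x iff it is eventually in every neighborhood of x."""
    return all(eventually_in(net, index_set, geq, U) for U in neighborhoods(x))

# The canonical net on the neighborhood system of x = 0: the index set is the
# set of neighborhoods of 0 directed by reverse inclusion (U >= V iff U is a
# subset of V), and the net picks a point out of each neighborhood.
index_set = neighborhoods(0)
geq = lambda U, V: U <= V     # for frozensets, <= means "is a subset of"
net = lambda U: min(U)        # a concrete choice of point in each U

print([x for x in sorted(X) if converges_to(net, index_set, geq, x)])
```

The net converges to 0 as expected, but since this space is not Hausdorff (the point 0 lies in every nonempty open set), the net in fact converges to all three points, illustrating why limit notation with an equals sign is avoided for nets with non-unique limits.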
### Ultranets and cluster points of a net

Let ${\displaystyle f}$ be a net in ${\displaystyle X}$ based on the directed set ${\displaystyle A}$ and let ${\displaystyle S}$ be a subset of ${\displaystyle X.}$ Then ${\displaystyle f}$ is said to be frequently in (or cofinally in) ${\displaystyle S}$ if for every ${\displaystyle a\in A}$ there exists some ${\displaystyle b\in A}$ such that ${\displaystyle b\geq a}$ and ${\displaystyle f(b)\in S.}$ A point ${\displaystyle x\in X}$ is said to be an accumulation point or cluster point of a net if (and only if) for every neighborhood ${\displaystyle U}$ of ${\displaystyle x,}$ the net is frequently in ${\displaystyle U.}$

A net ${\displaystyle f}$ in a set ${\displaystyle X}$ is called universal or an ultranet if for every subset ${\displaystyle S\subseteq X,}$ ${\displaystyle f}$ is eventually in ${\displaystyle S}$ or ${\displaystyle f}$ is eventually in ${\displaystyle X\setminus S.}$ Ultranets are closely related to ultrafilters.

## Examples

### Sequence in a topological space

A sequence ${\displaystyle a_{1},a_{2},\ldots }$ in a topological space ${\displaystyle X}$ can be considered a net in ${\displaystyle X}$ defined on ${\displaystyle \mathbb {N} .}$ The net is eventually in a subset ${\displaystyle S}$ of ${\displaystyle X}$ if there exists an ${\displaystyle N\in \mathbb {N} }$ such that for every integer ${\displaystyle n\geq N,}$ the point ${\displaystyle a_{n}}$ is in ${\displaystyle S.}$ So ${\displaystyle \lim {}_{n}a_{n}\to L}$ if and only if for every neighborhood ${\displaystyle V}$ of ${\displaystyle L,}$ the net is eventually in ${\displaystyle V.}$ The net is frequently in a subset ${\displaystyle S}$ of ${\displaystyle X}$ if and only if for every ${\displaystyle N\in \mathbb {N} }$ there exists some integer ${\displaystyle n\geq N}$ such that ${\displaystyle a_{n}\in S,}$ that is, if and only if infinitely many elements of the sequence are in ${\displaystyle S.}$ Thus a point ${\displaystyle y\in X}$
is a cluster point of the net if and only if every neighborhood ${\displaystyle V}$ of ${\displaystyle y}$ contains infinitely many elements of the sequence.

### Function from a metric space to a topological space

Consider a function from a metric space ${\displaystyle M}$ to a topological space ${\displaystyle X,}$ and a point ${\displaystyle c\in M.}$ We direct the set ${\displaystyle M\setminus \{c\}}$ reversely according to distance from ${\displaystyle c,}$ that is, the relation is "has at least the same distance to ${\displaystyle c}$ as", so that "large enough" with respect to the relation means "close enough to ${\displaystyle c}$". The function ${\displaystyle f}$ is a net in ${\displaystyle X}$ defined on ${\displaystyle M\setminus \{c\}.}$ The net ${\displaystyle f}$ is eventually in a subset ${\displaystyle S}$ of ${\displaystyle X}$ if there exists some ${\displaystyle y\in M\setminus \{c\}}$ such that for every ${\displaystyle x\in M\setminus \{c\}}$ with ${\displaystyle d(x,c)\leq d(y,c)}$ the point ${\displaystyle f(x)}$ is in ${\displaystyle S.}$ So ${\displaystyle \lim _{x\to c}f(x)\to L}$ if and only if for every neighborhood ${\displaystyle V}$ of ${\displaystyle L,}$ ${\displaystyle f}$ is eventually in ${\displaystyle V.}$ The net ${\displaystyle f}$ is frequently in a subset ${\displaystyle S}$ of ${\displaystyle X}$ if and only if for every ${\displaystyle y\in M\setminus \{c\}}$ there exists some ${\displaystyle x\in M\setminus \{c\}}$ with ${\displaystyle d(x,c)\leq d(y,c)}$ such that ${\displaystyle f(x)}$ is in ${\displaystyle S.}$ A point ${\displaystyle y\in X}$ is a cluster point of the net ${\displaystyle f}$ if and only if for every neighborhood ${\displaystyle V}$ of ${\displaystyle y,}$ the net is frequently in ${\displaystyle V.}$

### Function from a well-ordered set to a topological space

Consider a well-ordered set ${\displaystyle [0,c]}$ with limit point ${\displaystyle t}$ and a function ${\displaystyle f}$ from ${\displaystyle [0,t)}$ to a topological space ${\displaystyle X.}$ This function is a net on ${\displaystyle [0,t).}$ It is eventually in a subset ${\displaystyle V}$ of ${\displaystyle X}$ if there exists an ${\displaystyle r\in [0,t)}$ such that for every ${\displaystyle s\in [r,t)}$ the point ${\displaystyle f(s)}$ is in ${\displaystyle V.}$ So ${\displaystyle \lim _{x\to t}f(x)\to L}$ if and only if for every neighborhood ${\displaystyle V}$ of ${\displaystyle L,}$ ${\displaystyle f}$ is eventually in ${\displaystyle V.}$ The net ${\displaystyle f}$ is frequently in a subset ${\displaystyle V}$ of ${\displaystyle X}$ if and only if for every ${\displaystyle r\in [0,t)}$ there exists some ${\displaystyle s\in [r,t)}$ such that ${\displaystyle f(s)\in V.}$ A point ${\displaystyle y\in X}$ is a cluster point of the net ${\displaystyle f}$ if and only if for every neighborhood ${\displaystyle V}$ of ${\displaystyle y,}$ the net is frequently in ${\displaystyle V.}$ The first example is a special case of this with ${\displaystyle c=\omega .}$

## Properties

Virtually all concepts of topology can be rephrased in the language of nets and limits. This may be useful to guide the intuition, since the notion of limit of a net is very similar to that of limit of a sequence. The following set of theorems and lemmas helps cement that similarity:

• A subset ${\displaystyle S\subseteq X}$ is open if and only if no net in ${\displaystyle X\setminus S}$ converges to a point of ${\displaystyle S.}$ [5] It is this characterization of open subsets that allows nets to characterize topologies.
• If ${\displaystyle S\subseteq X}$ is any subset, then a point ${\displaystyle x\in X}$ is in the closure of ${\displaystyle S}$ if and only if there exists a net ${\displaystyle s_{\bullet }=\left(s_{a}\right)_{a\in A}}$ in ${\displaystyle S}$ (that is, with ${\displaystyle s_{a}\in S}$ for every index ${\displaystyle a\in A}$) with limit ${\displaystyle x.}$

• A subset ${\displaystyle S\subseteq X}$ is closed if and only if whenever ${\displaystyle s_{\bullet }=\left(s_{a}\right)_{a\in A}}$ is a net with elements in ${\displaystyle S}$ and limit ${\displaystyle x}$ in ${\displaystyle X,}$ then ${\displaystyle x\in S.}$

• A function ${\displaystyle f:X\to Y}$ between topological spaces is continuous at the point ${\displaystyle x}$ if and only if every net ${\displaystyle x_{\bullet }=\left(x_{a}\right)_{a\in A}}$ with ${\displaystyle \lim _{}x_{\bullet }\to x}$ satisfies ${\displaystyle \lim {}_{a}f\left(x_{a}\right)\to f(x).}$ This theorem is in general not true if "net" is replaced by "sequence": we have to allow for directed sets other than just the natural numbers if ${\displaystyle X}$ is not first-countable (or not sequential).

• In general, a net in a space ${\displaystyle X}$ can have more than one limit, but if ${\displaystyle X}$ is a Hausdorff space, the limit of a net, if it exists, is unique. Conversely, if ${\displaystyle X}$ is not Hausdorff, then there exists a net on ${\displaystyle X}$ with two distinct limits. Thus the uniqueness of the limit is equivalent to the Hausdorff condition on the space, and indeed this may be taken as the definition. This result depends on the directedness condition; a set indexed by a general preorder or partial order may have distinct limit points even in a Hausdorff space.

• The set of cluster points of a net is equal to the set of limits of its convergent subnets.

• A net has a limit if and only if all of its subnets have limits. In that case, every limit of the net is also a limit of every subnet.
• A space ${\displaystyle X}$ is compact if and only if every net ${\displaystyle x_{\bullet }=\left(x_{a}\right)_{a\in A}}$ in ${\displaystyle X}$ has a subnet with a limit in ${\displaystyle X.}$ This can be seen as a generalization of the Bolzano–Weierstrass theorem and Heine–Borel theorem.

• If ${\displaystyle f:X\to Y}$ and ${\displaystyle x_{\bullet }=\left(x_{a}\right)_{a\in A}}$ is an ultranet on ${\displaystyle X,}$ then ${\displaystyle \left(f\left(x_{a}\right)\right)_{a\in A}}$ is an ultranet on ${\displaystyle Y.}$

## Cauchy nets

A Cauchy net generalizes the notion of Cauchy sequence to nets defined on uniform spaces. [6]

A net ${\displaystyle x_{\bullet }=\left(x_{a}\right)_{a\in A}}$ is a Cauchy net if for every entourage ${\displaystyle V}$ there exists ${\displaystyle c\in A}$ such that for all ${\displaystyle a,b\geq c,}$ ${\displaystyle \left(x_{a},x_{b}\right)}$ is a member of ${\displaystyle V.}$ [6] [7] More generally, in a Cauchy space, a net ${\displaystyle x_{\bullet }}$ is Cauchy if the filter generated by the net is a Cauchy filter.

A topological vector space (TVS) is called complete if every Cauchy net converges to some point. A normed space, which is a special type of topological vector space, is a complete TVS (equivalently, a Banach space) if and only if every Cauchy sequence converges to some point (a property that is called sequential completeness). Although Cauchy nets are not needed to describe completeness of normed spaces, they are needed to describe completeness of more general (possibly non-normable) topological vector spaces.

## Relation to filters

A filter is another idea in topology that allows for a general definition for convergence in general topological spaces. The two ideas are equivalent in the sense that they give the same concept of convergence.
[8] More specifically, for every filter base an associated net can be constructed, and convergence of the filter base implies convergence of the associated net, and the other way around (for every net there is a filter base, and convergence of the net implies convergence of the filter base). [9] For instance, any net ${\displaystyle \left(x_{a}\right)_{a\in A}}$ in ${\displaystyle X}$ induces a filter base of tails ${\displaystyle \{\{x_{a}:a\in A,a_{0}\leq a\}:a_{0}\in A\}}$ where the filter in ${\displaystyle X}$ generated by this filter base is called the net's eventuality filter.

This correspondence allows any theorem that can be proven with one concept to be proven with the other. [9] For instance, continuity of a function from one topological space to the other can be characterized either by the convergence of a net in the domain implying the convergence of the corresponding net in the codomain, or by the same statement with filter bases.

Robert G. Bartle argues that despite their equivalence, it is useful to have both concepts. [9] He argues that nets are enough like sequences to make natural proofs and definitions in analogy to sequences, especially ones using sequential elements, such as is common in analysis, while filters are most useful in algebraic topology. In any case, he shows how the two can be used in combination to prove various theorems in general topology.

## Limit superior

Limit superior and limit inferior of a net of real numbers can be defined in a similar manner as for sequences. [10] [11] [12] Some authors even work with more general structures than the real line, like complete lattices. [13]

For a net ${\displaystyle \left(x_{a}\right)_{a\in A},}$ put ${\displaystyle \limsup x_{a}=\lim _{a\in A}\sup _{b\succeq a}x_{b}=\inf _{a\in A}\sup _{b\succeq a}x_{b}.}$

Limit superior of a net of real numbers has many properties analogous to the case of sequences.
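This analogy can be illustrated numerically for sequences (nets indexed by the natural numbers) by approximating the limit superior with the supremum of a single far-out tail. The following Python sketch is illustrative only; the sequences and the tail cutoff are invented for the demonstration, and a finite tail is of course only an approximation of the true limit superior:

```python
# Illustrative sketch: approximate lim sup of a sequence (a net indexed by N)
# by the supremum of one far-out tail. The sequences are invented examples.
def approx_limsup(xs, tail_start):
    """sup of the tail xs[tail_start:], a finite proxy for inf_n sup_{k>=n} x_k."""
    return max(xs[tail_start:])

N, T = 10_000, 9_000
x = [(-1) ** k + 1 / (k + 1) for k in range(N)]   # lim sup = 1
y = [(-1) ** (k + 1) * 1.0 for k in range(N)]     # lim sup = 1

lhs = approx_limsup([a + b for a, b in zip(x, y)], T)   # x + y = 1/(k+1), lim sup 0
rhs = approx_limsup(x, T) + approx_limsup(y, T)

# Subadditivity: limsup(x + y) <= limsup(x) + limsup(y). In this example the
# inequality happens to be strict, since neither sequence converges.
print(lhs, "<=", rhs, ":", lhs <= rhs)
```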
For example, ${\displaystyle \limsup(x_{a}+y_{a})\leq \limsup x_{a}+\limsup y_{a},}$ where equality holds whenever one of the nets is convergent.

## Citations

1. Moore, E. H.; Smith, H. L. (1922). "A General Theory of Limits". American Journal of Mathematics. 44 (2): 102–121. doi:10.2307/2370388. JSTOR 2370388.
2. (Sundström 2010, p. 16n)
3. Megginson, p. 143
4. Kelley 1975, pp. 65–72.
5. Howes 1995, pp. 83–92.
6. Willard, Stephen (2012), General Topology, Dover Books on Mathematics, Courier Dover Publications, p. 260, ISBN 9780486131788.
7. Joshi, K. D. (1983), Introduction to General Topology, New Age International, p. 356, ISBN 9780852264447.
8. R. G. Bartle, Nets and Filters in Topology, American Mathematical Monthly, Vol. 62, No. 8 (1955), pp. 551–557.
9. Aliprantis-Border, p. 32
10. Megginson, p. 217, p. 221, Exercises 2.53–2.55
11. Beer, p. 2
12. Schechter, Sections 7.43–7.47
# SurvivalFunction - Maple Help

Statistics

SurvivalFunction - compute the survival function

## Calling Sequence

SurvivalFunction(X, t, options)

## Parameters

X - algebraic; random variable or distribution
t - algebraic; point
options - (optional) equation of the form numeric=value; specifies options for computing the survival function of a random variable

## Description

• The SurvivalFunction function computes the survival function of the random variable X at the point t, which is defined as the probability that X takes a value greater than t. In other words, if $S\left(t\right)$ denotes the survival function of X and $F\left(t\right)$ denotes the cumulative distribution function of X, then $S\left(t\right)=1-F\left(t\right)$ for all real values of t.

• The first parameter can be a distribution (see Statistics[Distribution]), a random variable, or an algebraic expression involving random variables (see Statistics[RandomVariable]).

## Computation

• By default, all computations involving random variables are performed symbolically (see option numeric below).

• For more information about computation in the Statistics package, see the Statistics[Computation] help page.

## Options

The options argument can contain one or more of the options shown below. More information for some options is available in the Statistics[RandomVariables] help page.

• numeric=truefalse -- By default, the survival function is computed using exact arithmetic. To compute the survival function numerically, specify the numeric or numeric = true option.

## Examples

> $\mathrm{with}\left(\mathrm{Statistics}\right):$

Compute the survival function of the beta distribution with parameters p and q.
> $\mathrm{SurvivalFunction}\left('\mathrm{Β}'\left(p,q\right),t\right)$ ${1}{-}\left(\left\{\begin{array}{cc}{0}& {t}{<}{0}\\ \frac{{{t}}^{{p}}{}{\mathrm{hypergeom}}{}\left(\left[{p}{,}{1}{-}{q}\right]{,}\left[{1}{+}{p}\right]{,}{t}\right)}{{\mathrm{Β}}{}\left({p}{,}{q}\right){}{p}}& {t}{<}{1}\\ {1}& {\mathrm{otherwise}}\end{array}\right\\right)$ (1) If p = 3 and q = 5, the plot of the survival function is as follows: > $\mathrm{plot}\left(\mathrm{SurvivalFunction}\left('\mathrm{Β}'\left(3,5\right),t\right),t=0..1\right)$ The survival function can also be evaluated directly using numeric parameters. > $\mathrm{SurvivalFunction}\left('\mathrm{Β}'\left(3,5\right),\frac{1}{2}\right)$ ${1}{-}\frac{{35}{}{\mathrm{hypergeom}}{}\left(\left[{-4}{,}{3}\right]{,}\left[{4}\right]{,}\frac{{1}}{{2}}\right)}{{8}}$ (2) > $\mathrm{simplify}\left(\right)$ $\frac{{29}}{{128}}$ (3) The numeric option gives a floating point result. > $\mathrm{SurvivalFunction}\left('\mathrm{Β}'\left(3,5\right),\frac{1}{2},\mathrm{numeric}\right)$ ${0.226562500000000}$ (4) Define new distribution. > $T≔\mathrm{Distribution}\left(\mathrm{=}\left(\mathrm{PDF},t↦\frac{1}{\mathrm{\pi }\cdot \left({t}^{2}+1\right)}\right)\right):$ > $X≔\mathrm{RandomVariable}\left(T\right):$ > $\mathrm{CDF}\left(X,t\right)$ $\frac{{\mathrm{\pi }}{+}{2}{}{\mathrm{arctan}}{}\left({t}\right)}{{2}{}{\mathrm{\pi }}}$ (5) > $\mathrm{SurvivalFunction}\left(X,t\right)$ ${1}{-}\frac{{\mathrm{\pi }}{+}{2}{}{\mathrm{arctan}}{}\left({t}\right)}{{2}{}{\mathrm{\pi }}}$ (6) > $\mathrm{plot}\left(,t=-10..10\right)$ Another distribution > $U≔\mathrm{Distribution}\left(\mathrm{=}\left(\mathrm{CDF},t↦F\left(t\right)\right),\mathrm{=}\left(\mathrm{PDF},t↦f\left(t\right)\right)\right):$ > $Y≔\mathrm{RandomVariable}\left(U\right):$ > $\mathrm{CDF}\left(Y,t\right)$ ${F}{}\left({t}\right)$ (7) > $\mathrm{SurvivalFunction}\left(Y,t\right)$ ${1}{-}{F}{}\left({t}\right)$ (8) References Stuart, Alan, and Ord, Keith. 
Kendall's Advanced Theory of Statistics. 6th ed. London: Edward Arnold, 1998. Vol. 1: Distribution Theory.
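As a cross-check outside of Maple, the exact value SurvivalFunction(Β(3,5), 1/2) = 29/128 computed above can be reproduced with exact rational arithmetic, since the Beta(3, 5) density is the polynomial 105·t²·(1−t)⁴. The following Python sketch is illustrative only and not part of the Maple documentation; the integrand is expanded by hand before integrating term by term:

```python
from fractions import Fraction

def beta35_cdf(x):
    """CDF of Beta(3, 5): 105 * integral_0^x t^2 (1-t)^4 dt.
    The integrand expands to t^2 - 4t^3 + 6t^4 - 4t^5 + t^6."""
    x = Fraction(x)
    integral = (x**3 / 3 - x**4 + Fraction(6, 5) * x**5
                - Fraction(2, 3) * x**6 + x**7 / 7)
    return 105 * integral

survival = 1 - beta35_cdf(Fraction(1, 2))
print(survival)         # → 29/128
print(float(survival))  # → 0.2265625
```

This matches both the exact result simplify gives (29/128) and the floating-point value returned with the numeric option.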
# The cell membranes are mainly composed of

(A) Fats
(B) Proteins
(C) Phospholipids
(D) Carbohydrates
# 2D and 3D rendering within a BeginScene()

## Recommended Posts

I have a major problem: I can render textured triangles very well, but when I tried to render normal surfaces to the back buffer and then continue with textured triangles, the result is that my normal surfaces aren't always rendered; they even take some time to reappear. This behavior is random across all my normal surfaces rendered to the back buffer. Then I tried calling BeginScene(), rendering my triangles, calling EndScene(), then drawing the normal surfaces, and then calling BeginScene()/EndScene() again for the rest of the triangles, but the result was the same: my surfaces sometimes disappear randomly >:[

So, what can I do?

/\ /__\ C.Z. Hagen

---

1. Don't use more than one BeginScene and one EndScene per frame - on most cards this will kill your framerate, and on some, such as the Neon and Kyro, it will lead to all sorts of strange things disappearing, flickering, etc.

2. Some drivers batch up all the triangles you draw in a frame between a BeginScene and EndScene call, so when you call Draw*Primitive*() nothing actually gets drawn. The real drawing happens later inside the EndScene() call (or concurrently after the call returns).

3. Those drivers which do batch triangles between BeginScene and EndScene DO NOT batch up 2D drawing calls such as Blt and BltFast. Those drivers should also tell you this in the CAPS flags; look for NO2DDURING3DSCENE.

4. The preferable way to mix 2D and 3D on *ALL* graphics cards is to do the 2D graphics as texture mapped quads.

5. When you perform a 2D operation which requires the contents of the backbuffer to be known, then all 3D drawing gets serialised, so you lose all parallelism between hardware graphics acceleration and the CPU.
Doing any 2D, particularly Locks, kills performance for this reason - expect at least a 20fps difference (that's why using GetDC and TextOut to write FPS counts is so bad). It's the equivalent of having a multiprocessor system and then removing one of the processors (one is forced to wait for the other).

6. If you really must mix 2D and 3D, then your frame should look something like this to work reliably on all cards:

```cpp
void RenderAFrameOfMyCoolGame()
{
    pDev->BeginScene();

    // All 3D drawing and state setting goes here.
    // *ABSOLUTELY NO* 2D drawing (Blt, BltFast, Lock) goes here.
    pDev->DrawPrimitive(...);

    pDev->EndScene();

    // Do as much non-graphics work here as possible
    // to give the graphics card a chance to finish drawing
    // the 3D part of the scene.
    CalculateReallyComplexPhysicsStuff();

    // All 2D drawing goes here.
    // *ABSOLUTELY NO* 3D drawing here.
    pDDS->Blt(...);

    pDDS->Flip(...);   // or pDev->Present(...);
}
```

-- Simon O'Connor
Creative Asylum Ltd
www.creative-asylum.com
2018-01-21 19:02:14
https://couryes.com/matlab-dai-xie-sta518/
## Math Assignment Notes | MATLAB | STA518

December 27, 2022

MATLAB is a programming and numerical computing platform used by millions of engineers and scientists to analyze data, develop algorithms, and create models.

- Statistical Inference
- Statistical Computing
- (Generalized) Linear Models
- Statistical Machine Learning
- Longitudinal Data Analysis
- Foundations of Data Science

## Generating a Movie Database

We first need to come up with a method for characterizing movies. Table $4.1$ gives our system. MPAA stands for Motion Picture Association of America. It is an organization that rates movies. Other systems are possible, but this will be sufficient to test out our deep learning system.

Three of the data points that will be used are strings and two are numbers. One number, length, is a continuum, while ratings have discrete values. The second number, quality, is based on the "stars" in the rating. Some movie databases, like IMDB, have fractional values because they average over all their users. We created our own MPAA ratings and genres based on our opinions. The real MPAA ratings may be different. Length can be any duration. We'll use randn to generate the lengths around a mean of $1.8$ hours and a standard deviation of $0.15$ hours. Length is a floating-point number. Stars are one to five and must be integers.

We created an Excel file with the names of 100 real movies, which is included with the book's software. We assigned random genres and MPAA ratings (PG, R, and so forth) to them. We then saved the Excel file as tab-delimited text and search for tabs in each line. (There are other ways to import data from Excel and text files in MATLAB; this is just one example.) We then assign the data to the fields. The function will check to see if the maximum length or rating is zero, which it is for all the movies in this case, and then create random values.
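The recipe above uses MATLAB, but the sampling idea is easy to sketch in Python. The numbers below mirror the ones in the text (mean length 1.8 hours, standard deviation 0.15 hours, integer star ratings one to five); the genre and MPAA lists are illustrative placeholders, not the book's actual assignments.

```python
import random

# Illustrative placeholders -- the book assigns its own genres and MPAA ratings.
GENRES = ["Comedy", "Drama", "SciFi", "Horror", "Animated"]
MPAA = ["G", "PG", "PG-13", "R"]

def generate_movies(n, seed=0):
    """Create n movie records with random length (hours) and star rating."""
    rng = random.Random(seed)
    movies = []
    for k in range(n):
        movies.append({
            "name": f"Movie {k}",
            "genre": rng.choice(GENRES),
            "mpaa": rng.choice(MPAA),
            # Length is a continuum around a 1.8 h mean with 0.15 h deviation.
            "length": rng.gauss(1.8, 0.15),
            # Stars are discrete integers, one to five.
            "stars": rng.randint(1, 5),
        })
    return movies

movies = generate_movies(100)
```

The hypothetical `generate_movies` helper plays the role of the book's database-generation function, minus the Excel import step.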
You can create a spreadsheet with rating values as an extension of this recipe. We use str2double since it is faster than str2num when you know that the value is a single number. fgetl reads in one line and ignores the end-of-line characters. You'll also notice that we check for NaN in the length and rating fields.

## Generating a Viewer Database

Each watcher will have seen a fraction of the 100 movies in our database. This will be a random integer between 20 and 60. Each movie watcher will have a probability for each characteristic: the probability that they would watch a movie rated one or five stars, the probability that they would watch a movie in a given genre, etc. (Some viewers enjoy watching so-called "turkeys"!) We will combine the probabilities to determine the movies the viewer has watched. For MPAA, genre, and rating, the probabilities will be discrete. For the length, it will be a continuous distribution. You could argue that a watcher would always want the highest-rated movie, but remember this rating is based on an aggregate of other people's opinions and so may not directly map onto the particular viewer.

The only output of this function is a list of movie numbers for each user. The list is in a cell array. We start by creating cell arrays of the categories. We then loop through the viewers and compute probabilities for each movie characteristic. We then loop through the movies and compute the combined probabilities. This results in a list of movies watched by each viewer.

We use bar charts throughout. Notice how we make the $x$ labels strings for the genre and so on. We also rotate them 90 degrees for clarity. The length is the number of movies longer than the number on the $x$-axis. This data is based on our viewer model from a recipe in Section $4.3$, which is based on joint probabilities. We will train the neural net on a subset of the movies. This is a classification problem.
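A minimal Python sketch of the same idea (the book's version is in MATLAB and uses cell arrays): each viewer gets a probability for every value of every characteristic, and a movie's watch probability is the product of the probabilities of its attributes. The attribute values and probability choices here are invented for illustration.

```python
import random

def viewer_watch_list(movies, seed=0):
    """Pick the movies one viewer has seen by combining per-attribute probabilities."""
    rng = random.Random(seed)
    genres = sorted({m["genre"] for m in movies})
    mpaas = sorted({m["mpaa"] for m in movies})
    # Discrete probabilities for each genre, MPAA rating, and star rating.
    p_genre = {g: rng.random() for g in genres}
    p_mpaa = {r: rng.random() for r in mpaas}
    p_stars = {s: rng.random() for s in range(1, 6)}
    watched = []
    for k, m in enumerate(movies):
        # Combined probability: product of the individual attribute probabilities.
        p = p_genre[m["genre"]] * p_mpaa[m["mpaa"]] * p_stars[m["stars"]]
        if rng.random() < p:
            watched.append(k)
    return watched

# Tiny illustrative database: genre, MPAA rating, and stars per movie.
rng = random.Random(1)
movies = [{"genre": rng.choice(["Comedy", "Drama", "SciFi"]),
           "mpaa": rng.choice(["PG", "R"]),
           "stars": rng.randint(1, 5)} for _ in range(50)]
watched = viewer_watch_list(movies)
```

The returned list of movie indices is the Python analogue of one entry in the book's cell array of watch lists.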
We just want to know if a given movie would be picked or not picked by the viewer. We use patternnet to predict the movies watched. This is shown in the next code block. The input to patternnet is the sizes of the hidden layers, in this case a single layer of size 40. We convert everything into integers. Note that you need to round the results, since patternnet does not return integers despite the label being an integer. patternnet has methods train and view.
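patternnet is specific to MATLAB's toolboxes. As a rough stand-in for the same watched / not-watched classification, here is a tiny hand-rolled logistic regression in Python: a single linear layer rather than patternnet's 40-unit hidden layer, trained on an invented one-feature toy dataset purely for illustration.

```python
import math

def train_logistic(X, y, steps=500, lr=0.5):
    """Tiny logistic-regression trainer (log loss, plain stochastic gradient descent)."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(steps):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - yi                     # d(loss)/dz for the log loss
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def predict(w, b, xi):
    """Classify: 1 = watched, 0 = not watched."""
    z = sum(wj * xj for wj, xj in zip(w, xi)) + b
    return 1 if z > 0 else 0

# Toy stand-in for the movie features: one normalized feature, label = watched.
X = [[0.0], [0.1], [0.2], [0.8], [0.9], [1.0]]
y = [0, 0, 0, 1, 1, 1]
w, b = train_logistic(X, y)
```

Unlike patternnet, `predict` already returns integer labels, so no rounding step is needed here.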
2023-03-22 19:15:49
http://www.cs.cornell.edu/courses/cs2800/2017sp/lectures/lec17-chebychev.html
# Lecture 17: Chebychev's inequality and the weak law of large numbers

• Chebychev's inequality: statement, proof, example
• Weak law of large numbers: statement, proof, setting up sample space / random variables

## Chebychev's inequality

Claim (Chebychev's inequality): For any random variable $$X$$ and any $$a > 0$$, $Pr(|X - E(X)| \geq a) \leq \frac{Var(X)}{a^2}$

Proof: Note that $$|X - E(X)| \geq a$$ if and only if $$(X - E(X))^2 \geq a^2$$. Therefore $$Pr(|X - E(X)| \geq a) = Pr((X - E(X))^2 \geq a^2)$$. Applying Markov's inequality to the variable $$(X - E(X))^2$$ gives \begin{aligned} Pr(|X - E(X)| \geq a) &= Pr((X - E(X))^2 \geq a^2) \\ &\leq \frac{E((X - E(X))^2)}{a^2} \\ &= \frac{Var(X)}{a^2} \end{aligned} since $$E((X - E(X))^2) = Var(X)$$ by definition.

Example: Last time we used Markov's inequality and the fact that the average height is 5.5 feet to show that if a door is 55 feet high, then we are guaranteed that at least 90% of people can fit through it. If we also know that the standard deviation of height is $$σ = 0.2$$ feet, we can use Chebychev's inequality to build a smaller door.

Let $$X$$ be the height random variable, so $$Var(X) = σ^2 = 0.04$$. If $$X - E(X) \geq a$$ then $$|X - E(X)| \geq a$$. Therefore, the event $$(X - E(X) \geq a)$$ is a subset of the event $$(|X - E(X)| \geq a)$$, and thus $$Pr(X - E(X) \geq a) \leq Pr(|X - E(X)| \geq a)$$. This lets us apply Chebychev's inequality to conclude $$Pr(X - E(X) \geq a) \leq \frac{Var(X)}{a^2}$$.

Solving for $$a$$, we see that if $$a \geq \sqrt{0.04/0.10} \approx 0.633$$, then $$Pr(X - E(X) \geq a) \leq 0.10$$. This in turn gives us $$Pr(X \lt a + E(X)) = Pr(X - E(X) \lt a) \geq 0.9$$. Thus, if the door is at least $$6.14$$ feet tall, then 90% of the people can fit through.

## Weak law of large numbers

Suppose we wish to estimate the average value of the height of a population by sampling $$n$$ people from the population and averaging their height. The weak law of large numbers says that this will give us a good estimate of the "real" average.
Formally, we can model this experiment by letting our outcomes be sequences of $$n$$ people. We can define several random variables: $$X_1$$ is the height of the first person sampled, $$X_2$$ is the height of the second person sampled, $$X_3$$ is the height of the third, and so forth. Since these are all measures of height, $$E(X_1) = E(X_2) = \cdots = E(X_n)$$ (let's call this value $$\mu$$) and $$Var(X_1) = \cdots = Var(X_n)$$ (let's call this value $$\sigma^2$$). The result of our sampled average is given by the random variable $$(X_1 + X_2 + \cdots + X_n)/n$$. The weak law of large numbers says that this variable is likely to be close to the real expected value:

Claim (weak law of large numbers): If $$X_1, X_2, \dots, X_n$$ are independent random variables with the same expected value $$\mu$$ and the same variance $$σ^2$$, then $Pr\left(\left|\frac{X_1 + X_2 + \cdots + X_n}{n} - μ\right| \geq a\right) \leq \frac{σ^2}{na^2}$

Proof: By Chebychev's inequality, we have $Pr\left(\left|\sum X_i/n - E(\sum X_i/n)\right| \geq a\right) \leq \frac{Var(\sum X_i/n)}{a^2}$ Now, by linearity of the expectation, we have $E(\sum X_i/n) = \sum E(X_i)/n = nμ/n = μ$ As was shown in homework 5, $$Var(cX) = c^2Var(X)$$, and we also know that if $$X$$ and $$Y$$ are independent, then $$Var(X + Y) = Var(X) + Var(Y)$$. Therefore, we have $Var(\sum X_i/n) = \sum Var(X_i)/n^2 = nσ^2/n^2 = σ^2/n$ Plugging these into the result from Chebychev's inequality, we have $Pr\left(\left|\sum X_i/n - μ\right| \geq a\right) \leq \frac{σ^2}{na^2}$ which is what we were trying to show.
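Neither bound is part of the lecture notes, but both are easy to sanity-check numerically. The Python snippet below (an illustration, not course material) uses a fair six-sided die, for which $$E(X) = 3.5$$ and $$Var(X) = 35/12$$: it computes the Chebychev tail exactly, and estimates the weak-law tail for the average of $$n = 100$$ rolls by simulation.

```python
import random
from fractions import Fraction

# --- Chebychev's inequality, exactly, for one fair die roll ---
values = range(1, 7)
mu = Fraction(sum(values), 6)                            # E(X) = 7/2
var = sum((Fraction(v) - mu) ** 2 for v in values) / 6   # Var(X) = 35/12

a = Fraction(2)
# Exact tail: Pr(|X - E(X)| >= 2) = Pr(X in {1, 6}) = 1/3
tail = Fraction(sum(1 for v in values if abs(Fraction(v) - mu) >= a), 6)
cheb_bound = var / a ** 2                                # 35/48

# --- Weak law of large numbers, empirically, for the mean of n rolls ---
rng = random.Random(42)
n, trials, eps = 100, 2000, 0.5
exceed = 0
for _ in range(trials):
    mean = sum(rng.randint(1, 6) for _ in range(n)) / n
    if abs(mean - float(mu)) >= eps:
        exceed += 1

empirical = exceed / trials
wlln_bound = float(var) / (n * eps * eps)                # sigma^2 / (n a^2)
```

The exact Chebychev tail is $$1/3 \leq 35/48$$, and the simulated tail frequency for the sample mean falls well below the $$σ^2/(na^2) \approx 0.117$$ bound, as the theorem promises.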
2018-12-14 16:31:56
https://byjus.com/resistance-calculator/
# Resistance Calculator

Resistance formula: R = ρ × L/A

where:
- L = length of the wire (m)
- A = cross-sectional area of the wire (m²)
- ρ = resistivity (Ω·m)
- R = resistance (Ω)

The Resistance Calculator is an online tool which shows the resistance for the given inputs. Byju's Resistance Calculator makes calculations very simple: if the inputs are given, it can easily show the result.
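The formula itself is one line of code. Here is a quick Python illustration; the example values (copper's resistivity of about 1.68×10⁻⁸ Ω·m, a 10 m wire of 1 mm² cross-section) are standard textbook numbers, not taken from the calculator page.

```python
def resistance(rho, length, area):
    """R = rho * L / A  (ohms, with rho in ohm-metres, L in m, A in m^2)."""
    return rho * length / area

# Example: 10 m of copper wire with a 1 mm^2 (1e-6 m^2) cross-section.
R = resistance(1.68e-8, 10.0, 1e-6)
print(R)   # ~0.168 ohm
```

Note how a thinner wire (smaller A) or a longer wire (larger L) both increase the resistance, exactly as the formula says.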
2019-06-27 06:03:45
http://stackoverflow.com/questions/4033169/latex-cite-but-dont-reference?answertab=active
LaTeX: Cite, but don't reference

I'm producing a set of documents in LaTeX, and I would like to provide a single, global bibliography page for the whole set. This is because each document is page-limited: I don't want to take up space with references at the bottom of each one. This means in each case, I would like to cite in the text, but not produce a reference at the end. I am using bibtex/natbib to handle the referencing.

Simplest example:

```latex
\documentclass[]{article}
\bibliographystyle{/usr/share/texmf/bibtex/bst/natbib/plainnat.bst}
\usepackage{natbib}
\begin{document}
In \citet*{MEF2010} I described the method.
\bibliography{bibliography.bib}
\end{document}
```

How can I do this? Essentially I just want it to cite correctly:

    In Bloggs, Blagg and Blog (2010) I described the method.

But not add a references section at the end. Any ideas?

Thanks, David
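One approach worth trying (untested here, and relying on the `bibentry` package that ships alongside natbib): `\nobibliography` reads the generated `.bbl` file so the `\cite` commands resolve, without typesetting a reference list. Whether it plays well with a given class and style is worth checking before relying on it.

```latex
\documentclass{article}
\usepackage{natbib}
\usepackage{bibentry}   % provides \nobibliography
\bibliographystyle{plainnat}

\begin{document}
In \citet*{MEF2010} I described the method.

% Reads bibliography.bbl so the citation resolves,
% but prints no references section:
\nobibliography{bibliography}
\end{document}
```

The global bibliography page for the whole document set can then be built separately, e.g. with a dedicated document that `\nocite{*}`s the shared `.bib` file.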
2014-03-10 07:22:41