Topic: Spoonfeeding Field Equations
Replies: 3   Last Post: Mar 4, 2013 8:38 PM

Re: Spoonfeeding Field Equations
Posted: Feb 26, 2013 3:34 PM

On Feb 25, 11:53 am, Giovano di Bacco wrote:
> Koobee Wublee wrote:
> > They are [G]_00 (time) and [G]_11 (radial displacement), where [G] is the matrix that makes up the Einstein tensor. [G]_22 (longitude) and [G]_33 (latitude) are identical. <shrug>
> so is only kinda scaling,

What scaling? <shrug>

> what about translation and rotation?

In this case, [G], the Einstein tensor, is written in its spherically symmetric polar coordinate system. So, what translation and what rotation? <shrug>

The field equations are discussed quite a bit, but not many have brought up the subject of how the Christoffel symbols are derived, even though the Christoffel symbols are the basic building blocks of the field equations. In fact, among physicists, the Christoffel symbols are worshipped as a divine deity. Even fewer physicists nowadays understand how the Christoffel symbols are derived, and you cannot find a non-circular derivation in the textbooks any more. Fcvking sad, no? The following tells what the scientific communities are practicing.
<shrug>

** FAITH IS LOGIC ** LYING IS TEACHING ** DECEIT IS VALIDATION ** NITWIT IS GENIUS ** OCCULT IS SCIENCE ** FICTION IS THEORY ** FUDGING IS DERIVATION ** PARADOX IS KOSHER ** WORSHIP IS STUDY ** BULLSHIT IS TRUTH ** ARROGANCE IS SAGE ** BELIEVING IS LEARNING ** IGNORANCE IS KNOWLEDGE ** MYSTICISM IS WISDOM ** SCRIPTURE IS AXIOM ** CONJECTURE IS REALITY ** HANDWAVING IS REASONING ** PLAGIARISM IS CREATIVITY ** PRIESTHOOD IS TENURE ** FRAUDULENCE IS FACT

Date | Subject | Author
2/26/13 | Re: Spoonfeeding Field Equations | Koobee Wublee
3/4/13 | Re: Spoonfeeding Field Equations | Brian Q. Hutchings
{"url":"http://mathforum.org/kb/message.jspa?messageID=8423164","timestamp":"2014-04-16T10:56:37Z","content_type":null,"content_length":"18993","record_id":"<urn:uuid:a8b6f397-5a10-4b53-9646-f1a4eca40fb9>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00080-ip-10-147-4-33.ec2.internal.warc.gz"}
Find the centroid of the solid?

July 5th 2010, 03:25 AM #1
The solid region of uniform density lies between the xy-plane and the paraboloid z = 4 - (x² + y²). I found the centroid to be (0, 0, 4/3). Could somebody confirm this please? Many thanks.

July 5th 2010, 04:47 AM #2 (MHF Contributor, Apr 2005)
Yes, that is correct. In future, it would be better to show exactly what you did so that if there is an error we can point it out.
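The confirmation can also be checked symbolically. The sketch below (my addition, not part of the thread) computes the volume and the z-moment of the solid in polar coordinates with SymPy:

```python
import sympy as sp

r, theta = sp.symbols('r theta', nonnegative=True)
z = 4 - r**2  # height of the paraboloid above the xy-plane, in polar coordinates

# Volume and z-moment over the disk r <= 2 (where the paraboloid meets z = 0)
V = sp.integrate(z * r, (r, 0, 2), (theta, 0, 2 * sp.pi))
Mz = sp.integrate(sp.Rational(1, 2) * z**2 * r, (r, 0, 2), (theta, 0, 2 * sp.pi))

z_bar = sp.simplify(Mz / V)
print(V, Mz, z_bar)  # 8*pi 32*pi/3 4/3
```

The volume is 8π and the z-moment is 32π/3, so z̄ = 4/3; by symmetry x̄ = ȳ = 0, giving the centroid (0, 0, 4/3) as claimed.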
{"url":"http://mathhelpforum.com/calculus/150129-find-centroid-solid.html","timestamp":"2014-04-18T18:03:43Z","content_type":null,"content_length":"32360","record_id":"<urn:uuid:c193911a-8a93-4650-a206-b98ba5562845>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00136-ip-10-147-4-33.ec2.internal.warc.gz"}
Boundary terms and divergence theorem

August 24th 2007 #1 (Junior Member, Jun 2007)
Say I have a vector field $k^{\mu}$ defined on a space such as $R^{2,p}$. Actually, it is AdS space I'm working on, but I don't think that matters for what I want to ask. I assume that I can write the coordinates as (time, radial, p angles). Now if I take the integral of the divergence of the field, $\int \partial_{\mu}k^{\mu}\,dx^{p+2}$, then by the divergence theorem this gives zero if k dies off sufficiently quickly at spatial (and temporal) infinity, i.e. as r → ∞. Everything works as I want it to if I take it to die off as 1/r or quicker. Unless I have made a mistake, any other possibility doesn't work like I think it should. Can anyone reassure me about this? (Last edited by ppyvabw; August 24th 2007 at 05:30 PM.)

Reply: Everything is surely OK if it has compact support. But in this case it works fine, as it leaves "zero trace" near infinity.

Reply: lol, sorry, but I don't have a clue what you just said. I made my 'assumption' to make the answer do what I hoped it would, and am relying on the words 'sufficiently quickly'. But I still have to justify why I have assumed that a fall-off of 1/r is enough to ensure the divergence integrates to zero over spacetime. I mean, $1/\sqrt{r}$ gives zero at infinity, but for my purposes it doesn't work, and I am hoping it's because it doesn't satisfy the 'sufficiently quickly' condition.

Reply: Of course it doesn't. Just saw you have time + radial + space = 2 + p dimensions. I mean the divergence theorem applies when your vector field is zero near infinity (= "compactly supported") over space + time. Are you interested in radial dimensions only? If yes, remember that you must have a finite integral over time also. Now the divergence cannot go to zero slower than $r^{p+2}$. Can you see why? (Last edited by Rebesques; August 25th 2007 at 01:11 AM. Reason: old age)

Reply: Nope, I can't see why. Are you saying 1/r doesn't work?
I can safely assume the integral vanishes at infinite time in my problem. It's the r bit that is the problem. My expression for K at large r is

$K^{\mu}=X^{\mu}r^{p-2\lambda-2}\left(-\partial_{t}\widetilde{\varphi}\,\partial_{t}\varphi + \widetilde{\varphi}\varphi\, r^{2}+\partial_{\Omega}\widetilde{\varphi}\,\partial_{\Omega}\varphi+r^{2}m^{2}\widetilde{\varphi}\varphi\right)$

where $\varphi$ depends only on t and $\Omega$, and X is a vector, possibly with some t and $\Omega$ dependence I think, but no r dependence. $\varphi$ can be safely assumed to die off at temporal infinity sufficiently quickly. Now I want to arrange p and $\lambda$ so that $\int \partial_{\mu}k^{\mu}\,dx^{p+2}=0$. The second and fourth terms have the highest power of r. These are what I am thinking I should arrange to be 1/r, and then it does what I want it to. I should also say that the factor of $\sqrt{g}$, where g is the metric determinant, is included in K, so there need not be any $\sqrt{g}$ in the integral, and it is not the covariant divergence that I'm interested in, as K is a tensor density, not a tensor. (Last edited by ppyvabw; August 25th 2007 at 11:15 AM.)

Oh, and by the way, X is a Killing vector, so its covariant derivative is zero by Killing's equation, and I am hoping that $p-2\lambda=-1$, where $\lambda$ depends on p and m, and this condition will impose the correct restrictions on m. I am sort of managing to convince myself that I am right.

Reply: "My expression for K at large r is ... can be safely assumed to die off at temporal infinity sufficiently quickly." Why didn't you say so in the first place? This makes things easier. One last thing: sure that $\partial_{\mu}=(\partial_t,\partial_r,\partial_{\Omega})$?

Reply: I am kind of thinking the integral is 'separable' and I can just do the integral over r separately, the point being that any terms involving 1/r or greater will integrate to something like $r^{+\text{something}}$, which is infinity at r = infinity.
Reply: Yep, actually

$\int \psi(t,r,\Omega)\,dx^{p+2}=\int_{0}^{\infty}\left(\int_0^{\infty}\left(\int_{S_r}\psi(t,r,\Omega)\,d\Omega\right)dr\right)dt$

where $S_r$ is the p-sphere of radius r. And the order of integration cannot be changed. Now, there's something else I am worried about... (Last edited by Rebesques; August 25th 2007 at 03:10 PM. Reason: F* apple and their crappy products.)

Reply: Arghhhhhh. No, $\varphi$ is independent of r. I'm thinking $\int_{t} dt \int_{\Omega_p} d\Omega \int_r \partial_{\mu}K^{\mu}\, dr$. The expression for K that I gave is only the boundary behaviour. That expression is singular at r = 0, which is clearly wrong. Why can't you change the order of integration in your expression?

Reply: Man, you are letting me know one thing at a time. Of course you cannot change the order of integration; what is $S_r$ if you haven't fixed r?

Reply: lol, I'm not, I told you that here. $\Omega$ are just angles, like the spherical polars $(r, \theta, \phi)$, except there are many more angles, which is why I am thinking they can be separated out. I don't think r and $\Omega$ are related in any way.

Reply: Not to worry, there's a lot of details that would take forever to write down. Thanks anyway.

(Posts #2–#15, August 24th–25th 2007, by the Junior Member (Jun 2007) and Rebesques.)
{"url":"http://mathhelpforum.com/calculus/18011-boundary-terms-divergence-theorom.html","timestamp":"2014-04-21T05:32:11Z","content_type":null,"content_length":"75212","record_id":"<urn:uuid:ac5c0ee2-aa51-4b13-90dc-b0e50f45654d>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00251-ip-10-147-4-33.ec2.internal.warc.gz"}
Find Rate of Change of Water Depth

November 19th 2013, 03:19 PM #1
A hemispherical water tank with radius 6 meters is filled to a depth of h meters. The volume of the tank is given to be V = (1/3)πh(108 - h²), where 0 < h < 6. If water is being pumped into the tank at a rate of 3 cubic meters per minute, find the rate of change of the depth of the water when h = 2 meters.

My Work: dr/dt = 3. I need dV/dt when h is 2. I then replaced h by r/3 and proceeded to substitute r/3 for h in the given formula before taking the implicit derivative. I simplified and found the new volume equation to be V = 12π - (π/81)r³. I differentiated my new volume formula implicitly, substituted r = 6 (the given radius of the water tank) and simplified. My answer is dV/dt = 32π meters per minute. The book's answer is 3/(32π) meters per minute. How do I find the right answer? What are the steps for this question?

November 19th 2013, 06:08 PM #2 (Nov 2013)
Re: Find Rate of Change of Water Depth
I haven't looked at your problem very hard yet, but "water is being pumped into the tank at a rate of 3 cubic meters per minute" is talking about cubic metres, which is a volume. So I think dV/dt = 3, and you want to find dh/dt when h = 2. I got the same answer as the book. (Last edited by Melody2; November 19th 2013 at 06:13 PM. Reason: I worked out the answer)

November 19th 2013, 06:46 PM #3
Re: Find Rate of Change of Water Depth
It was the wrong setup on my part. It's hard knowing what we have and what we need to find in related-rates applications. I will need to practice more questions.

November 19th 2013, 06:59 PM #4 (Nov 2013)
Re: Find Rate of Change of Water Depth
Do you need any more help? Once you get the hang of these questions they are quite easy. Always use the units to help you know what you have been given and what you need to find.
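The book's answer can be reproduced by implicit differentiation of the given volume formula. The SymPy sketch below is my addition, not part of the thread:

```python
import sympy as sp

t = sp.symbols('t', positive=True)
h = sp.Function('h')(t)

# Tank volume as given in the problem: V = (1/3)*pi*h*(108 - h^2)
V = sp.Rational(1, 3) * sp.pi * h * (108 - h**2)

# Implicit differentiation: dV/dt = dV/dh * dh/dt, with dV/dt = 3
dVdt = sp.diff(V, t)
dhdt = sp.symbols('dhdt')
eq = sp.Eq(dVdt.subs(sp.Derivative(h, t), dhdt), 3)

# Evaluate at h = 2 and solve for dh/dt
sol = sp.solve(eq.subs(h, 2), dhdt)[0]
print(sol)  # 3/(32*pi)
```

By hand: dV/dh = (1/3)π(108 - 3h²) = π(36 - h²), which is 32π at h = 2, so dh/dt = 3/(32π) meters per minute, matching the book.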
{"url":"http://mathhelpforum.com/calculus/224442-find-rate-change-water-depth.html","timestamp":"2014-04-17T14:57:12Z","content_type":null,"content_length":"39826","record_id":"<urn:uuid:9b1017d2-7b98-4a05-83de-db5864a5dee8>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00329-ip-10-147-4-33.ec2.internal.warc.gz"}
Regular Pumping Lemmas

Explaining the Game | Starting the Game | User Goes First | Computer Goes First

This game approach to the pumping lemma is based on the approach in Peter Linz's An Introduction to Formal Languages and Automata. JFLAP defines a regular pumping lemma to be the following. Let L be an infinite regular language. Then there exists some positive integer m such that any w that is a member of L with |w| ≥ m can be decomposed as w = xyz, with |xy| ≤ m and |y| ≥ 1, such that w_i = xy^i z is also in L for all i = 0, 1, 2, .... In other words, any sufficiently long string in L can be broken down into three parts such that any number of repetitions of the middle part (pumping the middle part) will still yield a string in L.

Explaining the Game

JFLAP treats the regular pumping lemma as a two-player game. In the game below, player A is trying to find an xyz decomposition that is always in the language no matter what the i value is. Player B is trying to make it as hard as possible for player A to do so. If player B can pick a strategy such that he or she will always win regardless of player A's choices, meaning that no adequate decomposition exists, it is equivalent to a proof that the language is not regular. The game is played like this:

1. Player A picks an integer for m.
2. Player B picks a string w such that w is a member of L and |w| ≥ m.
3. Player A picks the partition of w into xyz such that |xy| ≤ m and |y| ≥ 1.
4. Player B picks an integer i such that xy^i z is not a member of L. If player B can do so, player B wins; otherwise, player A wins.

There are two possible modes for the game: when the user goes first and when the computer goes first. In the first mode, the user is player A and the computer is player B, and thus the user should be trying to find an acceptable decomposition to pump. In the second mode, the user is player B and the computer is player A, and thus the user should be trying to prevent the computer from generating a valid decomposition.
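The game for the first sample language, L = {a^n b^n : n ≥ 0}, can be sketched in a few lines of Python (my illustration, not part of JFLAP): when player B picks w = a^m b^m, every legal decomposition with |xy| ≤ m puts y inside the leading a's, so pumping with i = 0 removes only a's and leaves the language, and player B always wins.

```python
# A sketch of the pumping-lemma game for L = {a^n b^n : n >= 0}.

def in_lang(s):
    n = len(s) // 2
    return s == 'a' * n + 'b' * n

def winning_i(x, y, z):
    """Return an i for which x + y*i + z leaves L, or None if pumping is safe."""
    for i in [0] + list(range(2, 13)):          # JFLAP's permitted i values
        if not in_lang(x + y * i + z):
            return i
    return None

m = 4
w = 'a' * m + 'b' * m                           # player B's string, |w| >= m
# Try every legal decomposition w = xyz with |xy| <= m and |y| >= 1
results = []
for xy_end in range(1, m + 1):
    for x_end in range(xy_end):
        x, y, z = w[:x_end], w[x_end:xy_end], w[xy_end:]
        results.append(winning_i(x, y, z))

print(all(i is not None for i in results))      # True: player B always wins
```

Here i = 0 defeats every decomposition, which is exactly the strategy the computer uses in the walkthrough below.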
Starting the Game

To start a regular pumping lemma game, select Regular Pumping Lemma from the main menu: You will then see a new window that prompts you both for which mode you wish to utilize and which language you wish to work on. The default mode is for the user to go first. A list of languages is also shown, some of which are regular and some of which are not regular. To proceed to a language, click on the appropriate “Select” button, and a new screen will come up based on the language chosen and the mode selected when the button was pushed.

User Goes First

Let's do an example for when the user goes first. Since the “You go first” option is already selected, we do not need to do anything in the top panel. Now we need to choose a language. Let's try the first language on the screen, L = {a^n b^n : n ≥ 0}. Go ahead and press the “Select” button to the right of the language. The following screen should come up. You may be curious about all the various items on the screen. We will touch on all of them in time, but for now, let's play the game. We see that our objective in this mode is to find a valid partition, and that the first step that we need to perform is to choose a value for m. Go ahead and enter a fairly large value for m, such as 20, in the box where you are prompted. If the value is too large, the following message should appear in the panel where you entered m. JFLAP will only accept values for m in a finite range, both so m is large enough and so unnecessarily large values are avoided. Now, go ahead and enter “4”, which is a value of m in the accepted range. The error message should disappear, and the following two panels should now be visible on the screen under the m panel. The computer has chosen a value for w, and now the game prompts you to decompose the string. You can adjust the x, y, and z substrings by using the sliders to set the boundaries between them. 
The first slider will set the boundary between x and y, and the second will set the boundary between y and z. If the decomposition is acceptable, the “Set xyz” button will become visible and the message at the bottom of the panel will inform you that you can set the button. If it is unacceptable, the message at the bottom will inform you what the problem is with the current decomposition. Let's have x and y both equal “aa”. Go ahead and slide the top slider over two spaces. You should notice that the bottom slider also moves too. Then, slide the bottom slider over two more spaces, and then click the visible “Set xyz” button. You should notice that the decomposition for each substring appears in the appropriate boxes as you move the sliders, in addition to the lengths of the substrings (note however that all fields may not be filled in if the decomposition is invalid). When finished, the message in the decomposition panel will disappear, the last two panels will become visible, and the whole screen will look like this. There are a number of things that you should notice. First, in the panel below the decomposition panel, we see that the computer has chosen a value for i, which is 0. It also shows the pumped string, which is “aabbbb”. The panel below informs you that you have lost, as the pumped string is not in the language. This panel also allows you to see an animation of how the pumped string is assembled. Press the “Step” button to see each step in the animation, and the button will lose its visibility when all steps have been completed. “Restart”, if visible, will restart the animation at the beginning. When the animation is finished, the bottom panel should look like this. You should also notice that there is new information in the text box in the top panel. What is listed here is all the attempts that you have made to win this game. 
Scroll down a little in this text box and you should see your decomposition, i value, and result (either Failed or Won) for each attempt. Since we have made only one attempt so far, only one attempt is listed. If we try a few more times, more attempts will be added to the list. The more recent the attempt, the closer it will be to the top of the list. There are a variety of ways to try again. If we are happy with our m and w values, we can simply choose a new decomposition in the decomposition panel and press the “Set xyz” button. If, however, we wish to change the m value, we can either enter a new value in the m panel directly or press the “Clear All” button, which is in the top panel, and then enter a new m value. If you press the “Clear All” button, notice that the list of attempts remains visible. After entering a new m value and after the computer picks a new w value, you would then choose a new decomposition. The visibility of all panels will be updated to reflect the current stage of the game. After a few attempts, you may notice that you never win, no matter what m value you choose or which decomposition you pick. However, if you don't spot the strategy that the computer is employing, you may wonder whether you are simply choosing your values poorly or whether it is indeed impossible to win. Once you have made enough attempts, feel free to press the “Explain” button in the top panel. You will now see, in the text box where the attempts are listed (you can still see them if you scroll down further), whether a valid partition exists. Below this information is an informal, intuitive proof about why there is or is not a valid partition. For this example, there is indeed no valid partition. To get rid of the proof and information about the possibility of a valid partition, simply click on the “Clear All” button. The example you just generated is available in regUserFirst.jff. 
Computer Goes First

When finished, dismiss the tab, and you will return to the language selection screen. In the top panel, click on “Computer goes first.” Then, choose the first language, L = {a^n b^n : n ≥ 0}, again. The following screen should come up. You will notice a few things different from when you went first. First, the objective has changed. Your purpose is to prevent the computer from finding a valid partition. Second, the messages for each step in the game have changed, reflecting the different tasks you must perform as player B. The computer has chosen a value for m within the permissible range, in this case 9. You are prompted for a value for w. Go ahead and enter the string “aaaaabbbbb”, which is the smallest string in the language where |w| ≥ m. (Tip: if you get a large value for m, such as 18, and you wish for a smaller one, press “Clear All” repeatedly until one is generated.) If for some reason you enter a string that is either not in the language or where |w| < m, the appropriate error message will appear in the w panel. When finished, press enter, and the following two panels will become visible. The computer has chosen a decomposition based on the w value that you entered. Now, given this decomposition, it is your duty to choose an i value. Go ahead and enter 13 for i. However, instead of progressing to the next panel, the following error message should appear. For all languages JFLAP supports, you can only enter either 0 or a value between 2 and 12 for i. A value of 1 will result in a pumped string equal to w, and thus always in the language, so it is excluded. i values larger than 12 are excluded so the pumped string doesn't get too large. Thus, go ahead and enter 2 for i, which is in the permissible range. When finished, the screen should resemble the one below. This example is also available in regCompFirst.jff. You've won, because the computer could not choose a valid decomposition. 
In fact, as seen through the proof earlier, it never will find one with this language, so you will always win. Go ahead and make a few more attempts to familiarize yourself with the format, however. You can enter a new w value if you want to keep the same m value, or just press enter in the w text box if you want to keep both values. You can enter a new i value if you want to retain m, w, and the decomposition. Panel visibility will adjust itself accordingly. This concludes our brief tutorial on regular pumping lemmas. Thanks for reading!
{"url":"http://www.cs.duke.edu/csed/jflap/tutorial/pumpinglemma/regular/index.html","timestamp":"2014-04-16T10:17:28Z","content_type":null,"content_length":"14067","record_id":"<urn:uuid:8025ee22-d8c5-43da-921d-28dddc837967>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00596-ip-10-147-4-33.ec2.internal.warc.gz"}
compute eigenvalues for ill-conditioned matrix

Hi fellows, currently I am using the LAPACK routine DSYEVR to compute the eigenvalues of a positive (all diagonal nonnegative) symmetric real matrix, but have encountered some questions. Can anyone give me some hints? It's a small N×N matrix, with N only ~20. However, for some matrices, DSYEVR gives even ridiculous results. I observed that, for such matrices, the L1 condition number is larger than for those giving expected results. For example, when the condition number is below 30, it gives expected results; when it goes to 80, it starts to give wrong results; and when it goes to ridiculously large, say ~10^7, it gives very negative eigenvalues. BTW, I only need the smallest eigenvalue. So, I am wondering if there are any well-built routines or implementable methods for computing eigenvalues of an ill-conditioned matrix. Thank you very much!

Re: compute eigenvalues for ill-conditioned matrix

Anyone give some hints? It's said that LAPACK can handle ill-conditioned matrices. But it's weird that, for an ill-conditioned matrix, different eigensolvers give very different results. What would be the problem? Thanks a lot!
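Not an answer from the thread, but the behaviour is easy to reproduce with any symmetric eigensolver (here NumPy's `eigvalsh`, which wraps LAPACK's symmetric drivers): computed eigenvalues are accurate to roughly machine epsilon times the matrix norm, so the smallest eigenvalue of a badly conditioned positive matrix can come back near zero or even slightly negative. The Hilbert-like matrix below is a stand-in for the poster's matrix, not their actual data.

```python
import numpy as np

# A deliberately ill-conditioned symmetric positive-definite matrix
# (the Hilbert matrix), close to the size the poster describes.
N = 12
A = np.array([[1.0 / (i + j + 1) for j in range(N)] for i in range(N)])

print(f"condition number: {np.linalg.cond(A):.3e}")

# eigvalsh is the symmetric eigensolver; eigenvalues return sorted ascending.
w = np.linalg.eigvalsh(A)
print("smallest eigenvalue:", w[0])
print("largest eigenvalue: ", w[-1])
```

The true smallest eigenvalue here is around 1e-17, far below eps·‖A‖, so any sign or leading digits reported for it are numerical noise; that, rather than a bug in DSYEVR, is the likely explanation for the "very negative eigenvalues".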
{"url":"https://icl.cs.utk.edu/lapack-forum/viewtopic.php?f=2&t=4428","timestamp":"2014-04-19T06:57:20Z","content_type":null,"content_length":"16177","record_id":"<urn:uuid:8eb583c0-6fe9-4b8d-9ceb-f25c9d7c86d5>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00445-ip-10-147-4-33.ec2.internal.warc.gz"}
pythagorean triples exercise

Terry Reedy tjreedy at udel.edu
Sat Oct 23 21:28:55 CEST 2010

On 10/23/2010 3:34 AM, Lawrence D'Oliveiro wrote:
> In message <8idui6F213U1 at mid.individual.net>, Peter Pearson wrote:
>> Is it important to let "a" range all the way up to b, instead of
>> stopping at b-1? (tongue in cheek)
> Makes no difference. :)

The difference is that before one writes the restricted range, one must stop and think of the proof that a == b is not possible for a Pythagorean triple a, b, c. (If a == b, then c == sqrt(2)*a, and the irrationality of sqrt(2) implies that c is also irrational and therefore not integral.)

The OP asked how to translate the problem description into two loops, one nested inside the other, and I gave the simplest, obviously correct, brute-force search answer. If the problem specification had asked for primitive triples (no common factors), an additional filter would be required.

Another respondent referred, I believe, to Euclid's formula. However, it is not well suited to the problem specified.

Terry Jan Reedy

More information about the Python-list mailing list
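A sketch of the two-loop brute-force search the post describes (my reconstruction, not Reedy's actual code), with the a ≤ b range the thread is joking about:

```python
import math

def triples(limit):
    """All Pythagorean triples (a, b, c) with a <= b and c <= limit."""
    out = []
    for b in range(1, limit + 1):
        for a in range(1, b + 1):    # up to b is enough; a == b is impossible
            c = math.isqrt(a * a + b * b)
            if c * c == a * a + b * b and c <= limit:
                out.append((a, b, c))
    return out

print(triples(20))
# [(3, 4, 5), (6, 8, 10), (5, 12, 13), (9, 12, 15), (8, 15, 17), (12, 16, 20)]
```

Letting `a` run to `b` rather than `b - 1` indeed makes no difference to the output, since a = b would force c = a·sqrt(2), which is irrational.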
{"url":"https://mail.python.org/pipermail/python-list/2010-October/590445.html","timestamp":"2014-04-17T01:26:07Z","content_type":null,"content_length":"3690","record_id":"<urn:uuid:92cd48b7-c1fa-44cd-96b1-be2bd5a12bdd>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00275-ip-10-147-4-33.ec2.internal.warc.gz"}
[SOLVED] venn diagram help!

February 22nd 2009, 11:55 PM #1 (Feb 2009)
Heyhey, can someone please help me with this question? The universal set U and the two sets A and B contained within are such that:
- n(A) = 27
Determine:
1) n(A∪B)
2) n((A∪B)')
I have so far found out that A, B and A∩B don't add up to 70. How do you work out the actual number of A and B? Thank you xxx

February 23rd 2009, 12:25 AM #2
Draw a sketch.
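The thread's problem statement is incomplete (only n(A) = 27 survives), but the bookkeeping it needs is just inclusion-exclusion. The numbers below other than n(A) are placeholders I made up purely to show the arithmetic:

```python
# Inclusion-exclusion for two sets. Only n_A = 27 comes from the thread;
# n_U, n_B, and n_AB here are illustrative placeholders.
def union_and_complement(n_U, n_A, n_B, n_AB):
    n_union = n_A + n_B - n_AB      # n(A u B) = n(A) + n(B) - n(A n B)
    n_complement = n_U - n_union    # n((A u B)') = n(U) - n(A u B)
    return n_union, n_complement

print(union_and_complement(n_U=70, n_A=27, n_B=31, n_AB=10))  # (48, 22)
```

This is also why "A, B and A∩B don't add up to 70": the overlap n(A∩B) is counted twice in n(A) + n(B), and anything outside A∪B is not counted at all.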
{"url":"http://mathhelpforum.com/discrete-math/75249-solved-venn-diagram-help.html","timestamp":"2014-04-21T08:45:29Z","content_type":null,"content_length":"33960","record_id":"<urn:uuid:05dbd97b-7d8d-45ca-9048-3b32803a58ff>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00300-ip-10-147-4-33.ec2.internal.warc.gz"}
Description of image file

The illustration image006.gif shows the values in the select descriptor after the FETCH. They have the following values, in order:

- SELN has a value of 3.
- SELF has a value of 3, set by DESCRIBE.
- SELS has three cells containing the addresses of SELSB(1), SELSB(2), and SELSB(3).
- SELM has three cells, each with a value of 5.
- SELC has three cells containing the values 5, 5, and 4.
- SELV has three cells containing the addresses of SELVB(1), SELVB(2), and SELVB(3).
- SELL has three cells containing the values 10, 5, and 9.
- SELT has three cells containing the values 1, 1, and 1.
- SELI has three cells containing the addresses of SELIV(1), SELIV(2), and SELIV(3).

There are also three data buffers addressed by the descriptor values. SELSB is an array for the names of select-list items; it has three rows and five columns. It contains the letters E, N, A, M, and E in the first row; E, M, P, N, and O in the second row; and C, O, M, and M in the third row. SELVB is an array for the values of select-list items; it has three rows and 10 columns. In the first row, columns 1 through 6 contain the letters M, A, R, T, I, and N. In the second row, columns 2 through 5 contain the numbers 7, 6, 5, and 4. In the third row, columns 4 through 9 contain the characters 4, 8, 2, period, 5, and 0. SELIV is an array for the values of indicators; it has three rows and one column, all containing zeros set by FETCH.
{"url":"http://docs.oracle.com/cd/B14117_01/appdev.101/a42523/img_text/image012.htm","timestamp":"2014-04-16T08:44:20Z","content_type":null,"content_length":"1906","record_id":"<urn:uuid:89f7fbb0-7e73-4932-8b3c-6d940e695b89>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00035-ip-10-147-4-33.ec2.internal.warc.gz"}
Idledale Algebra 2 Tutor Find a Idledale Algebra 2 Tutor ...I may make up an example problem to help you but I refuse to do your homework as I am also a high school teacher. I am familiar with SAT math as I have helped tutor students for the past 2-3 years in this area. I have taken SAT three times (for English scores) and my own math scores ranged between 750-800. 7 Subjects: including algebra 2, chemistry, algebra 1, ACT Math ...I emphasize multimodal learning, and love to teach in a way that integrates subject areas with skill acquisition. I offer literacy intervention in phonics, fluency and comprehension. Mendi Y., interventionist at Legacy Elementary, spoke highly of my students' achievements; she said, "(Hannah) w... 53 Subjects: including algebra 2, English, reading, Spanish ...I appreciate diverse ways of thinking between individuals, and no two lessons are ever alike. When we sit down for tutoring, I hope that you are able to share what you have learned recently and what you might be confused about. Sometimes, nothing makes sense though, and I can help you work thro... 8 Subjects: including algebra 2, geometry, accounting, economics ...After graduating from college (UC Berkeley, where I studied anthropology and physics)in 1982, I worked in engineering administration for Lockheed Martin from 1983 to 1991. In 1992, when I was finishing an MBA degree (CU Denver, started while at Lockheed Martin), Dish Network hired me as an admin... 12 Subjects: including algebra 2, reading, English, writing I passed exams P, FM, and MFE in May 2010, August 2010, and November 2011, respectively. I have study manuals to help with studying if needed. I enjoy thinking about math and is willing to go beyond the study session (via email, text, or phone) when it is close to exam date. 5 Subjects: including algebra 2, statistics, algebra 1, probability
{"url":"http://www.purplemath.com/idledale_co_algebra_2_tutors.php","timestamp":"2014-04-21T10:34:29Z","content_type":null,"content_length":"23956","record_id":"<urn:uuid:e5b8b2f0-d297-45f2-aeb4-dfc306abea1d>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00553-ip-10-147-4-33.ec2.internal.warc.gz"}
Electric charge and Coulomb's law

• there are two kinds of charge, positive and negative
• like charges repel, unlike charges attract
• positive charge comes from having more protons than electrons; negative charge comes from having more electrons than protons
• charge is quantized, meaning that charge comes in integer multiples of the elementary charge e
• charge is conserved

Probably everyone is familiar with the first three concepts, but what does it mean for charge to be quantized? Charge comes in multiples of an indivisible unit of charge, represented by the letter e. In other words, charge comes in multiples of the charge on the electron or the proton. These things have the same size charge, but the sign is different. A proton has a charge of +e, while an electron has a charge of -e. Electrons and protons are not the only things that carry charge. Other particles (positrons, for example) also carry charge in multiples of the electronic charge. Those are not going to be discussed, for the most part, in this course, however. Putting "charge is quantized" in terms of an equation, we say:

q = n e

q is the symbol used to represent charge, while n is a positive or negative integer, and e is the electronic charge, 1.60 x 10^-19 coulombs.

The Law of Conservation of Charge

The law of conservation of charge states that the net charge of an isolated system remains constant. If a system starts out with an equal number of positive and negative charges, there's nothing we can do to create an excess of one kind of charge in that system unless we bring in charge from outside the system (or remove some charge from the system). Likewise, if something starts out with a certain net charge, say +100 e, it will always have +100 e unless it is allowed to interact with something external to it. Charge can be created and destroyed, but only in positive-negative pairs. 
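As a quick illustration of q = n e (my addition to the notes), here is how many elementary charges make up one coulomb of net charge:

```python
# Charge quantization q = n*e: count the elementary charges in one coulomb.
e = 1.60e-19   # elementary charge in coulombs, the value used in the text
q = 1.0        # one coulomb of net charge
n = q / e
print(f"{n:.2e} elementary charges")  # 6.25e+18
```

A coulomb is thus an enormous amount of charge compared to e, which is why macroscopic charge looks continuous even though it is quantized.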
Table of elementary particle masses and charges:

Electrostatic charging

Forces between two electrically-charged objects can be extremely large. Most things are electrically neutral; they have equal amounts of positive and negative charge. If this wasn't the case, the world we live in would be a much stranger place.

We also have a lot of control over how things get charged. This is because we can choose the appropriate material to use in a given situation. Metals are good conductors of electric charge, while plastics, wood, and rubber are not. They're called insulators. Charge does not flow nearly as easily through insulators as it does through conductors, which is why wires you plug into a wall socket are covered with a protective rubber coating. Charge flows along the wire, but not through the coating to you.

Materials are divided into three categories, depending on how easily they will allow charge (i.e., electrons) to flow along them. These are:

• conductors - metals, for example
• semi-conductors - silicon is a good example
• insulators - rubber, wood, plastic for example

Most materials are either conductors or insulators. The difference between them is that in conductors, the outermost electrons in the atoms are so loosely bound to their atoms that they're free to travel around. In insulators, on the other hand, the electrons are much more tightly bound to the atoms, and are not free to flow. Semi-conductors are a very useful intermediate class, not as conductive as metals but considerably more conductive than insulators. By adding certain impurities to semi-conductors in the appropriate concentrations, the conductivity can be well-controlled.

There are three ways that objects can be given a net charge. These are:

1. Charging by friction - this is useful for charging insulators. If you rub one material with another (say, a plastic ruler with a piece of paper towel), electrons have a tendency to be transferred from one material to the other. 
For example, rubbing glass with silk or saran wrap generally leaves the glass with a positive charge; rubbing PVC rod with fur generally gives the rod a negative charge.

2. Charging by conduction - useful for charging metals and other conductors. If a charged object touches a conductor, some charge will be transferred between the object and the conductor, charging the conductor with the same sign as the charge on the object.

3. Charging by induction - also useful for charging metals and other conductors. Again, a charged object is used, but this time it is only brought close to the conductor, and does not touch it. If the conductor is connected to ground (ground is basically anything neutral that can give up electrons to, or take electrons from, an object), electrons will either flow onto it or away from it. When the ground connection is removed, the conductor will have a charge opposite in sign to that of the charged object.

An example of induction using a negatively charged object and an initially-uncharged conductor (for example, a metal ball on a plastic handle):

(1) Bring the negatively-charged object close to, but not touching, the conductor. Electrons on the conductor will be repelled from the area nearest the charged object.
(2) Connect the conductor to ground. The electrons on the conductor want to get as far away from the negatively-charged object as possible, so some of them flow to ground.
(3) Remove the ground connection. This leaves the conductor with a deficit of electrons.
(4) Remove the charged object. The conductor is now positively charged.

A practical application involving the transfer of charge is in how laser printers and photocopiers work.

Why is static electricity more apparent in winter?
You notice static electricity much more in winter (with clothes in a dryer, or taking a sweater off, or getting a shock when you touch something after walking on carpet) than in summer because the air is much drier in winter than in summer. Dry air is a relatively good electrical insulator, so if something is charged the charge tends to stay. In more humid conditions, such as you find on a typical summer day, water molecules, which are polarized, can quickly remove charge from a charged object.

Try this at home

See if you can charge something at home using friction. I got good results by rubbing a Bic pen with a piece of paper towel. To test the charge, you can use a narrow stream of water from a faucet; if the object attracts the stream when it's brought close, you know it's charged. All you need to do is to find something to rub - try anything made out of hard plastic or rubber. You also need to find something to rub the object with - potential candidates are things like paper towel, wool, silk, and saran wrap or other plastic.

Coulomb's law

The force exerted by one charge q on another charge Q a distance r away is given by Coulomb's law:

F = k q Q / r^2

where k = 8.99 x 10^9 N m^2 / C^2 and r is the distance between the charges. Remember that force is a vector, so when more than one charge exerts a force on another charge, the net force on that charge is the vector sum of the individual forces. Remember, too, that charges of the same sign exert repulsive forces on one another, while charges of opposite sign attract.

An example

Four charges are arranged in a square with sides of length 2.5 cm. The two charges in the top right and bottom left corners are +3.0 x 10^-6 C. The charges in the other two corners are -3.0 x 10^-6 C. What is the net force exerted on the charge in the top right corner by the other three charges?

To solve any problem like this, the simplest thing to do is to draw a good diagram showing the forces acting on the charge. You should also let your diagram handle your signs for you.
Force is a vector, and any time you have a minus sign associated with a vector, all it does is tell you about the direction of the vector. If you have the arrows giving you the direction on your diagram, you can just drop any signs that come out of the equation for Coulomb's law.

Consider the forces exerted on the charge in the top right by the other three. You have to be very careful to add these forces as vectors to get the net force. In this problem we can take advantage of the symmetry, and combine the forces from charges 2 and 4 into a force along the diagonal (opposite to the force from charge 3) of magnitude 183.1 N. When this is combined with the 64.7 N force in the opposite direction, the result is a net force of 118 N pointing along the diagonal of the square.

The symmetry here makes things a little easier. If it wasn't so symmetric, all you'd have to do is split the vectors up into x and y components, add them to find the x and y components of the net force, and then calculate the magnitude and direction of the net force from the components. Example 16-4 in the textbook shows this process.

The parallel between gravity and electrostatics

An electric field describes how an electric charge affects the region around it. It's a powerful concept, because it allows you to determine ahead of time how a charge will be affected if it is brought into the region. Many people have trouble with the concept of a field, though, because it's something that's hard to get a real feel for. The fact is, though, that you're already familiar with a field. We've talked about gravity, and we've even used a gravitational field; we just didn't call it a field. When talking about gravity, we got into the (probably bad) habit of calling g "the acceleration due to gravity". It's more accurate to call g the gravitational field produced by the Earth at the surface of the Earth.
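Returning to the four-charge square example: the quoted magnitudes can be reproduced in a few lines. The variable names are mine, and k is the usual Coulomb constant.

```python
import math

k = 8.99e9            # Coulomb constant, N m^2 / C^2
q = 3.0e-6            # magnitude of each charge, C
a = 0.025             # side of the square, m

F_adjacent = k * q * q / a**2                 # pull from each adjacent charge (2 and 4)
F_pair = math.sqrt(2) * F_adjacent            # their vector sum, along the diagonal
F_diag = k * q * q / (a * math.sqrt(2))**2    # push from the opposite corner (charge 3)
F_net = F_pair - F_diag

print(round(F_pair, 1), round(F_diag, 1), round(F_net))   # 183.1 64.7 118
```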
If you understand gravity, you can understand electric forces and fields, because the equations that govern both have the same form. The gravitational force between two masses (m and M) separated by a distance r is given by Newton's law of universal gravitation:

F = G m M / r^2

A similar equation applies to the force between two charges (q and Q) separated by a distance r:

F = k q Q / r^2

The force equations are similar, so the behavior of interacting masses is similar to that of interacting charges, and similar analysis methods can be used. The main difference is that gravitational forces are always attractive, while electrostatic forces can be attractive or repulsive. The charge (q or Q) plays the same role in the electrostatic case that the mass (m or M) plays in the case of gravity.

A good example of a question involving two interacting masses is a projectile motion problem, where there is one mass m, the projectile, interacting with a much larger mass M, the Earth. If we throw the projectile (at some random launch angle) off a 40-meter-high cliff, the force on the projectile is given by:

F = mg

This is the same as the more complicated equation above, with G, M, and the radius of the Earth, squared, incorporated into g, the gravitational field. So, you've seen a field before, in the form of g. Electric fields operate in a similar way. An equivalent electrostatics problem is to launch a charge q (again, at some random angle) into a uniform electric field E, as we did for m in the Earth's gravitational field g. The force on the charge is given by F = qE, the same way the force on the mass m is given by F = mg.

We can extend the parallel between gravity and electrostatics to energy, but we'll deal with that later. The bottom line is that if you can do projectile motion questions using gravity, you should be able to do them using electrostatics.
In some cases, you'll need to apply both; in other cases one force will be so much larger than the other that you can ignore one (generally if you can ignore one, it'll be the gravitational force).
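To put a number on that last remark, compare the two "force per unit mass" fields for a proton. The 1000 N/C field strength is an assumed illustrative value, not from the text.

```python
g = 9.8               # Earth's gravitational field, N/kg
q = 1.60e-19          # proton charge, C
m = 1.67e-27          # proton mass, kg
E = 1000.0            # assumed uniform electric field, N/C

a_grav = g            # acceleration from gravity
a_elec = q * E / m    # acceleration from the electric force, F = qE

print(a_elec / a_grav)   # roughly 1e10: the electric force utterly dominates
```

Even this modest field out-accelerates gravity by about ten orders of magnitude, which is why the gravitational force is usually the one you can ignore.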
Definition of Areas In/between Tails

To compare two kinds of front-end design, six of each kind were installed on a certain make of compact car. Then each car was run into a concrete wall at 5 miles per hour, and the following are the costs of the repairs (in dollars):

Design 1: 127, 168, 143, 165, 122, 139
Design 2: 154, 135, 132, 171, 153, 149

Use four steps to test at the 0.01 level of significance whether the difference between the means of these two samples is significant.
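The excerpt ends before any solution is shown; a sketch of the standard pooled two-sample t computation for these data might look like the following (names and structure are mine; `scipy.stats.ttest_ind(d1, d2)` reports the same statistic):

```python
import math

d1 = [127, 168, 143, 165, 122, 139]   # Design 1 repair costs ($)
d2 = [154, 135, 132, 171, 153, 149]   # Design 2 repair costs ($)

def mean(x):
    return sum(x) / len(x)

def ss(x):  # sum of squared deviations from the mean
    m = mean(x)
    return sum((v - m) ** 2 for v in x)

n1, n2 = len(d1), len(d2)
sp2 = (ss(d1) + ss(d2)) / (n1 + n2 - 2)                      # pooled variance
t = (mean(d1) - mean(d2)) / math.sqrt(sp2 * (1/n1 + 1/n2))   # t statistic, 10 df

print(round(t, 3))   # -0.515
```

|t| is about 0.52, far below the two-tailed critical value t(0.005, 10 df) of about 3.17, so at the 0.01 level the difference between the design means is not significant.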
Topic: Problem Solving
Replies: 5   Last Post: Feb 25, 1995 2:33 PM

Re: Problem Solving
Posted: Feb 25, 1995 7:58 AM

>On Thu, 23 Feb 1995, David Scott Powell wrote:
>> something out and see what happens. The question is do you use problem
>> solving in your teaching? If yes, how? If no, why not? Then maybe if
>> someone wants to look at the standards to see what they suggest that would
>> be good too. Responses?

>All my teaching consists of problem solving.
>All my research also.
>All my life also.
>What is the point ?
>Mmm, I understand. Standards... Well, they are vague about it.
>Andrei Toom

Andrei, do you know what I mean by problem solving? What do you think it is? How do you use it? Are you sure they are vague about it, or do you really know what is meant by problem solving?

Scott Powell
University Lab School
1776 University Ave.
Honolulu, HI 96826
My Biased Coin

Today I get to celebrate that I'm two years done with my three-year stint as "Area Dean for Computer Science" (chair). Whoever the new person is, they're supposed to start July 1, 2013. I've already started encouraging the possible successors to step up. (Indeed, I've already started to become a bit more ambitious. I'm hoping to line up the next 3 or 4 people for the position; no tag-backs for a decade...)

It's not that I'm unhappy with the job. (I realize that statement is a no-op; I have to say that. I'm trying to get others to take over.) I'm pleased with what I've been able to do. We tenured two faculty last year (Hanspeter Pfister and Radhika Nagpal); we promoted another (Yiling Chen); we hired a new faculty member (Yaron Singer); and we've just had a junior faculty search for next year approved. We're (still) a relatively small department with a lot of demands on us, and I came in with a clear goal for us for faculty growth. I feel we've been successful in this regard. There have been some other nice successes, such as the new SEAS Design Fair I helped manage and organize, which I expect will be an annual event. And I'm able to act as the faculty interface with the administration in various ways, helping, I hope, keep things running.

But I'll also be happy to step down. I'm overdue for a sabbatical already (the price of agreeing to a 3-year job). I'll enjoy getting the time back for other things. (Though I expect I'm being unrealistic, and that some other committee or administrative task will try to absorb the time.) I like the "serve 3 years and out" model (though I see weaknesses in it too, in terms of setting up longer-term infrastructures). I enjoy taking on new experiences and challenges, so trying out "management" (if that's what this job is) has been interesting and educational. But in one more year, another change will be good.
Alistair Sinclair asked me to post the call at http://simons.berkeley.edu/cfp_summer2012.html for the Call for Proposals for Simons Institute programs. The deadline is mid-July. Worth noting -- two semester-long programs for Fall 2013 are already decided: these are "Real Analysis in Computer Science," organized by Gil Kalai, Subhash Khot, Michel Ledoux, Elchanan Mossel, Prasad Raghavendra and Luca Trevisan; and "Theoretical Foundations of Big Data Analysis," organized by Stephen Boyd, Peter Buehlmann, Michael Jordan, Ravi Kannan, Michael Mahoney and Muthu Muthukrishnan. Get your tickets now.

Lance informs me we have newly elected officers at SIGACT:

Chair: Paul Beame
Venkatesan Guruswami
Rocco Servedio
Avrim Blum
Tal Rabin

Congratulations/condolences to the winners. I'm sure Paul will do a great job -- he's a regular commenter here, and I always find his opinions incredibly well thought out, even in (rare) cases where I disagree. I can't wait to see what he does. I'd also like to thank Lance for his service as chair the last few years. He had the unenviable job of keeping the community happy, preserving the structures that have served us well while trying to introduce new ideas where there looked to be room for improvement. I think he's done a great job, generating some controversy and discussion while keeping everything moving forward. He (and the other SIGACT volunteers) deserve our thanks. So, thanks!

I received the following note from Mikkel Thorup and Alexandr Andoni. I believe it's appropriate to share:

Dear friends of Mihai,

We made a blog in Mihai's memory. Celebrating Mihai's energy and spirit, please cheer him with a glass of wine (or other spirits), and send in a picture to be posted on the page:

Best regards,
Mikkel and Alex

Many of you have left wonderful comments about Mihai here. I hope you'll copy them over and possibly add to them at the memorial site.

[** UPDATE **]

Dear friends of Mihai,

We made a blog in Mihai's memory.
Celebrating Mihai's energy and spirit, please cheer him with a glass of wine (or other spirits), and send in a picture to be posted on the page:

Best regards,
Mikkel and Alex

[** UPDATE END **]

Mikkel Thorup just sent me the following to post regarding Mihai Patrascu.

Mihai Patrascu, aged 29, passed away on Tuesday June 5, 2012, after a 1.5-year battle with brain cancer. Mihai's career was short but explosive, full of rich and beautiful ideas as witnessed, e.g., in his 19 STOC/FOCS papers. Mihai was very happy about being co-winner of the 2012 EATCS Presburger Young Scientist Award for his ground-breaking work on data structure lower bounds. It was wonderful that the community stood up to applaud this achievement at STOC'12. Unfortunately he will not make it to the award ceremony on July 10 at ICALP. Mihai's appreciation for the award shows in the last post on his blog.

I was fortunate enough to be one of Mihai's main collaborators. One of the things that made it possible to work on hard problems was having lots of fun: playing squash, going on long hikes, and having beers celebrating every potentially useful idea. On this last note, Mihai's wife Mira tells me that she does not want any flowers and that his funeral will be back in Romania. However, she wants people to have a glass of wine in Mihai's memory, thinking about him as the inspired and fun young man that he was.

[Justin talks about his upcoming work, to be presented at HotCloud.]

For the past few years, Michael and I, along with our awesome collaborators, have worked towards developing practical protocols for verifying outsourced computations. In roughly chronological order, the relevant papers are here, here, here, here, and most recently here.
In this last paper (joint work with Mike Roberts and Hanspeter Pfister), we really tried to push these protocols into practice, largely by taking advantage of their inherent parallelizability: we'll be presenting it at HotCloud next week, and it is the main impetus for this blog post. My hope here is to give a (somewhat) brief, unified overview of what we've accomplished with this line of work, and how it relates to some exciting parallel lines of inquiry by other researchers.

Our main motivation is that of Alice, who stores a large data set on the cloud, and asks the cloud to perform a computation on the data (say, to compute the shortest path between two nodes in a large graph, or to solve a linear program defined over the data). Of course, Alice may be a company or an organization, rather than an individual. The goal is to provide Alice with a guarantee that the server performed the requested computation correctly, without requiring Alice to perform the requested computations herself, or even to maintain a local copy of the data (since Alice may have resorted to the cloud in the first place because she has more data than she can store). In short, we want to save Alice as much time and space as possible, while also minimizing the amount of extra bookkeeping that the cloud has to do to prove the integrity of the computation.

Alice may want such integrity guarantees because she is concerned about simple errors, like dropped data, hardware faults, or a buggy algorithm, or she may be more paranoid and fear that the cloud is deliberately deceptive or has been externally compromised. So ideally we'd like our protocols to be secure against arbitrarily malicious clouds, but sufficiently lightweight for use in more benign settings. This is an ambitious goal, but achieving it could go a long way toward mitigating trust issues that hinder the adoption of cloud computing solutions.
Surprisingly powerful protocols for verifiable computation were famously discovered within the theory community several decades ago, in the form of interactive proofs, PCPs, and the like. These results are some of the brightest gems of complexity theory, but as of a few years ago they were mainly theoretical curiosities, far too inefficient for actual deployment (with the notable exception of certain zero-knowledge proofs). We've been focusing on interactive proof methods, and have made substantial strides in improving their efficiency.

One direction we've focused on is the development of highly optimized protocols for specific important problems, like reporting queries (what value is stored in memory location x of my database?), matrix multiplication, graph problems like perfect matching, and certain kinds of linear programs. Many of these are provably optimal in terms of space and communication costs, consist of a single message from the cloud to Alice (which can be sent as an email attachment or posted on a website), and already save Alice considerable time and space while imposing minimal burden on the cloud, both in theory and experimentally. But for the rest of this post I will focus on *general-purpose* methods, which are capable of verifying arbitrary computations.

The high-order insights of this line of work are as follows. The statements below have precise theoretical formulations, but I'm referring to actual experimental results with a full-blown implementation. Note that a lot of engineering work went into making our implementations fast, like choosing the "right" finite field to work over, and working with the right kinds of circuits.

1) We can save Alice substantial amounts of space essentially for free. The reason is that existing interactive proof protocols (such as Interactive Proofs for Muggles by Goldwasser, Kalai, and Rothblum, which is the protocol underlying our implementation) only require Alice to store a fingerprint of the data.
This fingerprint can be computed in a single, light-weight streaming pass over the input (say, while Alice uploads her data to the cloud), and serves as a sort of "secret" that Alice can use to catch the cloud in a lie. The fingerprint doesn't even depend on the computation being outsourced, so Alice doesn't need to know what computation she's interested in until well after she's seen the input, and she never needs to store the input locally.

2) Our implementation already saves Alice a lot of time relative to doing the computation herself. For example, when multiplying two 512x512 matrices, Alice requires roughly a tenth of a second to process the input, while naive matrix multiplication takes about seven times longer. And the savings increase substantially at larger input sizes (as well as when applying our implementation to more time-intensive computations than matrix multiplication), since Alice's runtime in the protocol grows roughly linearly with the input size. So I'd argue that verifiable computation is essentially a solved problem in settings where the main focus is saving Alice time, and the runtime of the cloud is of secondary importance. At least this is the case for problems solvable by reasonably small-depth circuits, for which our implementation is most efficient.

3) We've come a long way in making the prover more efficient. Theoretically speaking, in our ITCS paper with Graham Cormode, we brought the runtime of the cloud down from polynomial in the size of a circuit computing the function of interest, to quasilinear in the size of the circuit. Practically speaking, a lot of work remains to be done on this aspect (for example, our single-threaded cloud implementation takes about 30 minutes to multiply two 256 x 256 matrices, and matrix multiplication is a problem well-suited to these sorts of protocols), but we are in much better shape than we were just a few years ago.
4) All of the protocols (special-purpose and general-purpose alike) are extremely amenable to parallelization on existing hardware. This holds for both Alice and the cloud (although Alice runs extremely quickly even without parallelization; see Point 1). For example, using GPUs we can bring the runtime of the cloud to < 40 seconds when multiplying two 256 x 256 matrices. Obviously this is still much slower than matrix multiplication without integrity guarantees, but we're now just one or two orders of magnitude away from undeniable usefulness.

The extended abstract appearing in HotCloud (which should be viewed largely as an advertisement for the arxiv version) can be found here. We tried hard to give an accessible, if very high level, overview of the powerful ideas underlying interactive proofs, which I hope will be useful for researchers who are encountering verifiable computation for the first time. Slides describing the entirety of this line of work in more detail can be found here.

I want to close by mentioning two exciting lines of work occurring in parallel with our own. First, Ben-Sasson, Chiesa, Genkin, and Tromer are working toward developing practical PCPs, or probabilistically-checkable proofs (see their new paper here). The PCP setting is much more challenging than the interactive proof setting we have been working in above: in a PCP, there is no interaction to leverage (i.e. the cloud sends a single message to Alice), and moreover Alice is only permitted to look at *a few bits* of the proof. The latter may seem like an artificial constraint that doesn't matter in real outsourcing scenarios, but it turns out that building a practical PCP system would buy you quite a bit.
This is because one can throw certain cryptographic primitives on top of a PCP system (like collision-resistant hash functions) and get a wide variety of powerful protocols, such as succinct arguments for all of NP (i.e., protocols requiring very little communication, which are secure against computationally bounded adversaries). The work of BSCGT appears to still be in the theoretical stage, but is very exciting nonetheless. Check out their paper for more details.

Second is work of Setty, McPherson, Blumberg, and Walfish, from NDSS earlier this year (see their project page here). They implemented an argument system originally due to Ishai, Kushilevitz, and Ostrovsky, and bring the runtime of the cloud down by a factor of 10^20 relative to a naive implementation (yes, I said 10^20; this again highlights the considerable engineering work that needs to be done on top of the theory to make proof or argument systems useful). Our protocols have several advantages not shared by SMBW (like security against computationally unbounded adversaries, and the ability to save the verifier time even when outsourcing a single computation), but this is another big step toward a practical implementation for verified computation. It looks like related work by Setty, Vu, Panpalia, Braun, Blumberg, and Walfish will be presented at USENIX Security 2012 as well (see the conference page here).

The golden age for negative applications of interactive proofs and PCPs (such as hardness of approximation results) arrived over 15 years ago, and continues to this day. Perhaps the time for positive applications is now.
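For a concrete taste of verification that is cheaper than recomputation, consider Freivalds' classic randomized check for matrix multiplication. This is a much older and simpler idea than the interactive proofs described above (shown here only as flavor, not as the authors' protocol): it verifies a claimed product C = A*B in O(n^2) time per trial instead of redoing the O(n^3) multiplication.

```python
import numpy as np

def freivalds(A, B, C, trials=20):
    """Randomized check that C == A @ B.
    Each trial costs three matrix-vector products, O(n^2), versus
    O(n^3) to recompute the product from scratch."""
    n = C.shape[1]
    for _ in range(trials):
        x = np.random.randint(0, 2, size=n)          # random 0/1 vector
        if not np.array_equal(A @ (B @ x), C @ x):
            return False                             # definitely wrong
    return True                                      # wrong w.p. at most 2**-trials

np.random.seed(1)
A = np.random.randint(0, 10, (64, 64))
B = np.random.randint(0, 10, (64, 64))
C = A @ B
print(freivalds(A, B, C))        # True: a correct product always passes
print(freivalds(A, B, C + 1))    # False: an incorrect product is caught
```

A correct product passes every trial exactly (the arithmetic is exact over integers), while an incorrect one slips past a trial with probability at most 1/2, so a handful of trials drives the error probability down geometrically.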
Palos Verdes Peninsula Algebra 1 Tutor

...I have taught Algebra in school and tutored it for twenty years. Whether you are struggling or sailing, I guarantee that your test scores will go up, or I'll teach it again for free. Are you feeling lost in class?
24 Subjects: including algebra 1, Spanish, calculus, writing

...I have worked primarily with middle school students for the past four years. I used to be a translator for students who just immigrated from other countries. I would help them in the classroom academically, but also socially outside of the classroom.
29 Subjects: including algebra 1, English, reading, GED

...I am pretty flexible with time and so will allow a 3-hour cancellation policy. I will also, under certain circumstances, offer the chance for a makeup class or to have the lesson through email. I do operate from home, but I offer a quiet space for the lesson to take place without distractions.
5 Subjects: including algebra 1, reading, grammar, vocabulary

...I'm looking to attend graduate school in the near future in experimental condensed matter physics. I love teaching math and physics and am perfectly happy to deviate from the standard material to introduce the sort of things that made me love math and science in high school. My particular empha...
26 Subjects: including algebra 1, physics, calculus, geometry

...I enjoy working with different age groups and getting to know my students so that I can better help them in the areas where they may need improvement. My goal as a tutor is to help students to master the material by developing a set of tools that will help them for a very long time. In that regard, I hold myself to a high standard and ask that my clients do as well.
22 Subjects: including algebra 1, reading, biology, algebra 2
Search our database of handpicked sites

Looking for a great physics site? We've tracked down the very best and checked them for accuracy.

You searched for "quantum mechanics". We found 16 results on physics.org and 147 results in our database of sites (of which 141 are Websites, 6 are Videos, and 0 are Experiments).

Search results on physics.org:

• Amazing demonstration of quantum levitation by Tel Aviv University's quantum levitation group.
• Basic mechanics descriptions and some basic experimental simulations, but with good links to more advanced sites.
• A to Z of quantum physics terms with short descriptions and links to further info for each entry.
• An explanation of quantum encryption and how it uses quantum entanglement to make it work.
• This is the introduction page to various sections on quantum topics.
• News from New Scientist on the topic of quantum physics.
• Good clear article from the National Geographic on quantum teleportation, including its use in cryptography.
• The quantum physics section of Hyperphysics covers lots of information. This is excellent for linking areas and learning about how they interact.
• A PhD Comics animation explaining the principle behind quantum computers that relies on superposition and entanglement.
• A complex mathematical description of the essential non-classical nature of quantum entanglement.
What are the essential properties of algebraic closure on an arbitrary structure?

Define the "model theoretic" notion of a closure function as follows:

Definition (1): Let $D$ be a non-empty set. A function $cl:P(D)\longrightarrow P(D)$ is called a closure function iff it has the following properties:

$(1)~\forall A\subseteq D~~~~A\subseteq cl(A)$

$(2)~\forall A,B\subseteq D~~~~A\subseteq B\Longrightarrow cl(A)\subseteq cl(B)$

$(3)~\forall A\subseteq D~~~~cl(cl(A))=cl(A)$

We say that $cl$ is a "good" closure on $D$ if it has the following property too:

$(4)~\forall A\subseteq D~~~~cl(A)=\bigcup_{B\in P_{<\omega}(A)}cl(B)$

and a "pregeometry" if we have:

$(5)~\forall A\subseteq D~~~~\forall a,b\in D~~~~~~a\in cl(A\cup \lbrace b\rbrace)\setminus cl(A)\Longrightarrow b\in cl(A\cup \lbrace a\rbrace)$

There are many trivial good closures on a given set which are not related to any structure, but there are also some non-trivial natural good closures on the domain of an arbitrary $\mathcal{L}$-structure $\mathcal{M}$, like the well-known "algebraic closure" ($acl_{\mathcal{M}}$), "definable closure" ($dcl_{\mathcal{M}}$) and "structural closure" ($scl_{\mathcal{M}}$), which is defined as

$\forall A\subseteq Dom(\mathcal{M})~~~~~scl_{\mathcal{M}}(A):=Dom(\langle A\rangle_{\mathcal{M}})$

where $\langle A\rangle_{\mathcal{M}}$ is the substructure of $\mathcal{M}$ generated by the set $A$.

Now the main question is about the "essential" properties of these closure functions on an arbitrary structure. In other words, is "goodness" the unique essential property of $acl_{\mathcal{M}}$, $dcl_{\mathcal{M}}$ and $scl_{\mathcal{M}}$ on an arbitrary $\mathcal{L}$-structure $\mathcal{M}$? Precisely:

Question (1): Let $D$ be an arbitrary non-empty set, and $cl:P(D)\longrightarrow P(D)$ be a good closure function on $D$. Is there a first order language $\mathcal{L}$ and an $\mathcal{L}$-structure $\mathcal{M}$ such that $Dom(\mathcal{M})=D$ and $scl_{\mathcal{M}}=cl$?
Question (2): What is the answer to question (1) for $acl_{\mathcal{M}}$ and $dcl_{\mathcal{M}}$?

Remark (1): Note that producing a negative answer for the above questions requires finding a property $P$ different from "being a good closure" and proving that the functions $acl$, $dcl$ and $scl$ have the property $P$ on "any" structure, so that there is no such language and structure for an arbitrary function $cl:P(D)\longrightarrow P(D)$, because such a function may have the property $\neg P$. So one can re-ask the above questions in the following restated form:

"Let $D$ be an arbitrary non-empty set, and $cl:P(D)\longrightarrow P(D)$ be a good closure function with property $P$ on $D$. Is there a first order language $\mathcal{L}$ and an $\mathcal{L}$-structure $\mathcal{M}$ such that $Dom(\mathcal{M})=D$ and $scl_{\mathcal{M}}=cl$ ($acl_{\mathcal{M}}=cl$ or $dcl_{\mathcal{M}}=cl$)?"

So we can go further and try to characterize "all" essential properties of $scl, acl, dcl$ on an arbitrary structure and ask the following question:

Question (3): What is the property $P$ such that for any non-empty set $D$ and any function $cl:P(D)\longrightarrow P(D)$ which is a good closure function with property $P$, the answer to question (1) or (2) is positive?

In the other direction, it is well known that $acl_{\mathcal{M}}$ is a pregeometry on "strongly minimal" structures. So:

Question (4): Is there a known class of $\mathcal{L}$-structures on which the functions $scl$ or $dcl$ are a pregeometry?

1 Answer

$\textbf{Question 1}$: The answer to this question is affirmative. The closure operators that you call good closure operators are normally called algebraic closure operators. Furthermore, each algebraic closure operator is of the form $\mathrm{scl}_{\mathcal{A}}$ for some algebra $\mathcal{A}$. A proof of this fact is given here, in Theorem 3.5 of Chapter 2.
Suppose that $C$ is an algebraic closure operator on some set $A$. Let $\mathcal{F}_{n}$ be the set of all $n$-ary functions $f:A^{n}\rightarrow A$ such that $f(a_{1},...,a_{n})\in C(a_{1},...,a_{n})$ whenever $a_{1},...,a_{n}\in A$. Let $\mathcal{A}=(A,\bigcup_{n}\mathcal{F}_{n})$. Then it is easy to see that $C=\mathrm{scl}_{\mathcal{A}}$.

Good answer. Have you any idea about the other questions? – Ali Sadegh Daghighi Sep 21 '13 at 4:28
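The construction in this answer can be checked mechanically on a toy example. The Python sketch below is my own illustration (not from the answer): it takes $D=\{0,1,2\}$ with the algebraic closure $cl(A)=A\cup\{0\}$, enumerates the admissible functions $\mathcal{F}_n$ for $n\le 2$ (which suffices for this tiny example), and verifies that the generated-substructure closure agrees with $cl$ on every subset:

```python
from itertools import chain, combinations, product

D = [0, 1, 2]

def cl(A):
    """A sample algebraic ('good') closure operator on D: cl(A) = A ∪ {0}."""
    return frozenset(A) | {0}

def admissible_functions(n):
    """All n-ary f with f(a_1,...,a_n) in cl({a_1,...,a_n})."""
    tuples = list(product(D, repeat=n))
    for values in product(D, repeat=len(tuples)):
        f = dict(zip(tuples, values))
        if all(f[t] in cl(t) for t in tuples):
            yield f

# Arities 0..2 are enough for this tiny example.
ops = [(n, f) for n in (0, 1, 2) for f in admissible_functions(n)]

def scl(A):
    """Domain of the substructure of (D, ops) generated by A (fixpoint)."""
    S = set(A)
    changed = True
    while changed:
        changed = False
        for n, f in ops:
            for t in product(sorted(S), repeat=n):
                if f[t] not in S:
                    S.add(f[t])
                    changed = True
    return frozenset(S)

def subsets(xs):
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

assert all(scl(A) == cl(A) for A in subsets(D))
print("scl agrees with cl on every subset of D")
```

The 0-ary functions (constants lying in $cl(\emptyset)$) are what make $scl(\emptyset)=cl(\emptyset)$ come out right.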
How to calculate the pair correlation function g(r)

Particle tracking using IDL -- John C. Crocker and Eric R. Weeks

This explanation is for three-dimensional data. To calculate g(r), do the following:

• Pick a value of dr
• Loop over all values of r that you care about:
1. Consider each particle you have in turn. Count all particles that are a distance between r and r + dr away from the particle you're considering. You can think of this as all particles in a spherical shell surrounding the reference particle. The shell has a thickness dr.
2. Divide your total count by N, the number of reference particles you considered -- probably the total number of particles in your data.
3. Divide this number by 4 pi r^2 dr, the volume of the spherical shell (the surface area 4 pi r^2, multiplied by the small thickness dr). This accounts for the fact that as r gets larger, for trivial reasons you find more particles with the given separation.
4. Divide this by the particle number density. This ensures that g(r)=1 for data with no structure. In other words, if you just had an arbitrarily placed spherical shell of inner radius r and outer radius r+dr, you'd expect to find about rho * V particles inside, where rho is the number density and V is the volume of that shell.
• In 2D, follow the algorithm as above, but in step #3 divide by 2 pi r dr (the area of a circular shell of thickness dr) instead.

One caveat: For experimental data, you often have edges to your sample that are artificial. For example, you take a picture of particles but at the edges of your picture, the system extends further outwards. Thus when calculating g(r) based on reference particles near the edge of your image, you have a problem. You'll have to modify step #3 above with the correct volume/area that actually lies within the image you're looking at. The routines I've written take care of that.

I wrote IDL routines to calculate g(r) in 2D and 3D.
These do some special tricks to deal with experimental data, in other words, to cleverly deal with the edge effects.

2D program: For a particle near the edge of a rectangular image, when I'm counting particles a distance r away from it, for many values of r the circle of radius r extends outside of the image. I did some math to figure out how to determine how much angular extent of the circle lies within the image for these cases (in the subroutine checkquadrant). Thus the edges are correctly accounted for.

3D program: For a particle near the edge of a rectangular image box, when I'm counting particles a distance r away from it, we have the same problem described for 2D data. However, calculating the resulting solid angle of the sphere contained within the box is more than I could handle mathematically, for arbitrary box dimensions and arbitrary locations within the box. So I did a trick. My image boxes tend to be short and wide, that is, very narrow in Z and large in X and Y. My program does the following. I'm calculating g(r) for r < rmax, with a default rmax = 10. I consider only reference particles that are more than rmax away from the horizontal edges of the box, so I never have to worry about overlaps in X and Y. The resulting formula, to worry about overlaps in Z alone, turns out to be quite reasonable. (That is, I calculate the surface area of a spherical hemisphere of radius R that is cut off at a finite height H < R. It turns out this formula is simple: 2 pi R H.) So, an important warning: Don't choose rmax to be more than half of the width of your data, in X and Y. This restriction is only for the 3D program.

See here for additional comments on my "special tricks".

This page was written by Eric Weeks: weeks(at)physics.emory.edu
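For orientation, the four-step recipe at the top of this page (with no edge corrections at all -- so only valid when particles are far from the boundaries, or for an idealized system) can be sketched in Python rather than IDL. The function name and arguments here are my own, not those of the IDL routines:

```python
import numpy as np

def pair_correlation_3d(points, dr, r_max, box_volume):
    """Naive g(r) for 3D data, ignoring edge effects entirely."""
    points = np.asarray(points, dtype=float)
    n = len(points)
    rho = n / box_volume                        # particle number density
    edges = np.arange(0.0, r_max + dr, dr)
    counts = np.zeros(len(edges) - 1)
    for i in range(n):                          # step 1: each reference particle
        d = np.linalg.norm(points - points[i], axis=1)
        d = d[d > 0]                            # drop the self-distance
        counts += np.histogram(d, bins=edges)[0]
    r = 0.5 * (edges[:-1] + edges[1:])          # bin centers
    shell_vol = 4.0 * np.pi * r**2 * dr         # step 3: spherical shell volume
    return r, counts / n / shell_vol / rho      # steps 2 and 4
```

For 2D data, replace the shell volume by `2 * np.pi * r * dr`, as in the modified step #3.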
Proof verification

March 2nd 2011, 04:15 PM

I was recently presented with this problem:

$\mbox{Let ABCD be a parallelogram, where the diagonal BD is equal to the side AB. Show that}$

$\displaystyle{\frac{\mid{AD}\mid}{\mid{AC}\mid} < \frac{2}{3}}$

$\mbox{where }\mid{AD}\mid\mbox{ and }\mid{AC}\mid\mbox{ are the lengths of the sides.}$

I solved this problem by first drawing up the parallelogram, like this:

Attachment 21025

Then I added a few extra lines, like this:

Attachment 21026

adding the height h and a point P. After that I set a condition that $h>0$ and stated that $\mid{BD}\mid=\mid{AB}\mid=\mid{DC}\mid$ and also the fact that $\mid{DP}\mid=\frac{\mid{AD}\mid}{2}$, which I'm too lazy to prove right now, as I'm still struggling with latex.

After this I used the pythagorean theorem to show that ${\mid{AC}\mid}^2=(1.5{\mid{AD}\mid})^2+h^2$, which means that $\mid{AC}\mid=\sqrt{(1.5{\mid{AD}\mid})^2+h^2}$

As an effect, this means that $\mid{AC}\mid>\sqrt{(1.5{\mid{AD}\mid})^2}$, as $\sqrt{(1.5{\mid{AD}\mid})^2+h^2}$ clearly is bigger than $\sqrt{(1.5{\mid{AD}\mid})^2}$

Then, I just made myself an inequality.

$\displaystyle{\frac{\mid{AC}\mid}{1.5\mid{AD}\mid}>1}$

$\displaystyle{\frac{1}{1.5\mid{AD}\mid}>\frac{1}{\mid{AC}\mid}}$

$\displaystyle{\frac{\mid{AD}\mid}{1.5\mid{AD}\mid}>\frac{\mid{AD}\mid}{\mid{AC}\mid}}$

And then finally $\displaystyle{\frac{\mid{AD}\mid}{\mid{AC}\mid}<\frac{2}{3}}$, which I was supposed to show.

My question is if I've done everything right, as the book in which I found the problem solves it with trigonometry, and the section for the problem was a section about trigonometry. If my solution is incorrect I need to know why.

March 7th 2011, 07:24 AM

It's correct. :) But there is a geometrical proof I got. Drop a perpendicular from B to AD and call the foot of the perpendicular K. Let AC meet BD at L. Then |AK|=|KD| and |AL|=|LC|. Let AL and BK meet at G. G is the centroid since AL and BK are medians.
To prove: |AD|/|AC| < 2/3, that is, |AK|/|AL| < 2/3. Now |AK| < |AG|, since AG is the hypotenuse of the right-angled triangle AGK. So |AK|/|AL| < |AG|/|AL|. We know that |AG|/|AL| = 2/3 from elementary geometry, because G is the centroid, which proves the result.
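As a quick numerical sanity check (not part of either proof above), the key relation |AC|^2 = (1.5|AD|)^2 + h^2 from the first proof makes the bound hold for every height h > 0, with the ratio approaching 2/3 as h shrinks. A few sample values in Python:

```python
import math

def ratio(AD, h):
    """|AD| / |AC|, using |AC|^2 = (1.5*|AD|)^2 + h^2 from the first proof."""
    return AD / math.sqrt((1.5 * AD) ** 2 + h ** 2)

# Strictly below 2/3 for any h > 0, approaching 2/3 as h -> 0.
assert all(ratio(AD, h) < 2 / 3 for AD in (0.5, 1.0, 3.0) for h in (0.1, 1.0, 10.0))
print(ratio(1.0, 1.0))
```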
Dynamic behaviors of the Ricker population model under a set of randomized perturbations. (English) Zbl 0952.92025

Summary: We studied the dynamics of the Ricker population model under perturbations by the discrete random variable $\epsilon$ which follows distribution $P\left\{\epsilon ={a}_{i}\right\}={p}_{i}$, $i=1,\cdots ,n$, $0<{a}_{i}\ll 1$, $n\ge 1$. Under the perturbations, $n+1$ blurred orbits appeared in the bifurcation diagram. Each of the $n+1$ blurred orbits consisted of $n$ sub-orbits. The asymptotes of the $n$ sub-orbits in one of the $n+1$ blurred orbits were ${N}_{t}={a}_{i}$ for $i=1,\cdots ,n$. For other $n$ blurred orbits, the asymptotes of the $n$ sub-orbits were ${N}_{t}={a}_{i}\exp\left[r\left(1-{a}_{i}\right)\right]+{a}_{j}$, $j=1,2,\cdots ,n$, for $i=1,\cdots ,n$, respectively. The effects of variances of the random variable $\epsilon$ on the bifurcation diagrams were examined. As the variance value increased, the bifurcation diagram became more blurred. Perturbation effects of the approximate continuous uniform random variable and random error were compared. The effects of the two perturbations on dynamics of the Ricker model were similar, but with differences. Under different perturbations, the attracting equilibrium points and two-cycle periods in the Ricker model were relatively stable. However, some dynamic properties, such as the periodic windows and the $n$-cycle periods $\left(n>4\right)$, could not be observed even when the variance of a perturbation variable was very small. The process of reversal of the period-doubling, an important feature of the Ricker and other population models observed under constant perturbations, was relatively unstable under random perturbations.

92D25 Population dynamics (general)
37N25 Dynamical systems in biology
37H99 Random dynamical systems
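The randomized iteration described in the abstract is easy to simulate. The sketch below uses hypothetical parameter values (the abstract does not list the actual $r$ and $a_i$ used in the study) and draws the perturbation $\epsilon$ from a two-point distribution:

```python
import numpy as np

rng = np.random.default_rng(0)

def ricker_randomized(r, a, p, n_steps=500, N0=0.5):
    """Iterate N_{t+1} = N_t * exp(r * (1 - N_t)) + eps, where the
    perturbation eps takes value a[i] with probability p[i]."""
    N = N0
    traj = []
    for _ in range(n_steps):
        eps = rng.choice(a, p=p)
        N = N * np.exp(r * (1.0 - N)) + eps
        traj.append(N)
    return np.array(traj)

# Hypothetical parameters: r = 2.0 (two-cycle regime), eps in {0.01, 0.02}.
traj = ricker_randomized(r=2.0, a=[0.01, 0.02], p=[0.5, 0.5])
```

Sweeping r over a grid and plotting the tail of each trajectory would reproduce a (blurred) bifurcation diagram of the kind the abstract describes.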
st: RE: pearson (correction)

From: "Feiveson, Alan H. (JSC-SK311)" <alan.h.feiveson@nasa.gov>
To: <statalist@hsphsun2.harvard.edu>
Subject: st: RE: pearson (correction)
Date: Fri, 3 Nov 2006 08:21:14 -0600

I'm sorry - I think I included the wrong program in my last message - that one only works for Type IV. Here is another version that I think works in general. I think the third argument `x0' is supposed to be an offset. You can probably get around this by something like

gen x0=0
pearr y x x0

Al F.

program define pearr
version 7.0
syntax varlist [aweight] [if] [in]
args y x x0
qui summ `y' [`weight' `exp'] `if' `in' ,det
scalar mu1=r(mean)
scalar mu2=r(Var)
scalar skew=r(skewness)
scalar kurt=r(kurtosis)
scalar sig=r(sd)
local s "scalar"
`s' mu3=`s'(skew)*`s'(sig)^3
`s' mu4=`s'(kurt)*`s'(mu2)^2
`s' be1=`s'(skew)^2
`s' be2=`s'(kurt)
scalar mu1=-.190783898
scalar mu2=3.238424951
scalar be1=0.829135838
scalar be2=4.862944362
scalar sig=sqrt(scalar(mu2))
local be1 "scalar(be1)"
local be2 "scalar(be2)"
local mu2 "scalar(mu2)"
local mu3 "scalar(mu3)"
local mu4 "scalar(mu4)"
local sig "scalar(sig)"
local skew "scalar(skew)"
`s' top = `be1'*(`be2'+3)^2
`s' bot = 4*(2*`be2'-3*`be1'-6)*(4*`be2'-3*`be1')
`s' kap = `s'(top)/`s'(bot)
di "kappa = ",`s'(kap)
scalar h1=`s'(mu1)-`x0'
local h1 "scalar(h1)"
scalar S=4*`h1'*`mu2'/`mu3'
local S "scalar(S)"
scalar r = 2+`S'
local r "scalar(r)"
scalar asq=(`r'-1)*`mu2'-`h1'*`h1'
scalar a=sqrt(scalar(asq))
local a "scalar(a)"
scalar v=-`r'*`h1'/`a'
scalar m=(`r'+2)/2
local v "scalar(v)"
local a "scalar(a)"
local m "scalar(m)"
scalar L=scalar(r)*scalar(r)+scalar(v)*scalar(v)
local L "scalar(L)"
scalar top4=3*(scalar(a)^4)*`L'*((`r'+6)*`L'-8*`v'*`v')
scalar bot4=(`r'^4)*(`r'-1)*(`r'-2)*(`r'-3)
scalar rhs4=scalar(top4)/scalar(bot4)
di "mu4, rhs ",scalar(mu4)," ",scalar(rhs4)
scalar top3=-4*`L'*`v'*scalar(a)^3
scalar bot3=(`r'^3)*(`r'-1)*(`r'-2)
scalar rhs3=scalar(top3)/scalar(bot3)
di "mu3, rhs ",scalar(mu3)," ",scalar(rhs3)
scalar rhs2=`L'*scalar(a)*scalar(a)/(scalar(r)*scalar(r)*(scalar(r)-1))
di "mu2, rhs ",scalar(mu2)," ",scalar(rhs2)
scalar rhs1=-scalar(a)*`v'/`r'
scalar lhs1=scalar(mu1)-`x0'
di "mu1-x0, rhs ",scalar(lhs1)," ",scalar(rhs1)
if `s'(kap) > 0 { /* Type IV */
tempvar X
gen `X'=`x'-`x0'
qui cap gen zx=. `if' `in'
qui replace zx=`m'*log(1+(`X'/`a')^2)+`v'*atan(`X'/`a') `if' `in'
qui cap gen f0=. `if' `in'
qui replace f0=exp(-zx) `if' `in'
qui integ f0 `x' `if' `in'
qui cap gen fh_mom=. `if' `in'
qui replace fh_mom=f0/r(integral) `if' `in'
scalar r = 6*(`be2'-`be1'-1)/(2*`be2'-3*`be1'-6)
scalar A0=12*`be2'+12*`be1'+36
scalar A1=10*`be2'+30+12*`be1'
scalar A2=2*`be2'-6+3*`be1'
scalar D=A1*A1-4*A0*A2
scalar r1=(A1-sqrt(D))/(2*A2)
scalar r2=(A1+sqrt(D))/(2*A2)
di "r1, r2 ",scalar(r1)," ",scalar(r2)
scalar r = r1
local r "scalar(r)"
scalar bot=16*(`r'-1)-`be1'*(`r'-2)*(`r'-2)
local bot "scalar(bot)"
scalar v=`r'*(`r'-2)*sqrt(`be1')/sqrt(`bot')
scalar a = sqrt(`mu2'*`bot'/16)

-----Original Message-----
From: owner-statalist@hsphsun2.harvard.edu [mailto:owner-statalist@hsphsun2.harvard.edu] On Behalf Of Feiveson, Alan H. (JSC-SK311)
Sent: Friday, November 03, 2006 8:08 AM
To: statalist@hsphsun2.harvard.edu
Subject: st: RE: RE: Creating a distribution from moments

Reza - If you really want to fit a Pearson distribution to your moments, I have an old Stata program to do it, but I don't remember exactly what the output is. Basically if y is your data and x is an x-axis plotting variable (can be the same as y or cover the same range as y but, say, equally spaced) then typing

pearson y x

produces a Pearson density fy0. If you plot fy0 vs x and superimpose a histogram of y you should get a good approximation. The parameters that produce fy0 are buried in a list of scalars produced by the program.
Maybe from a description of the Pearson fitting process you can figure out what they are and what "Type" your distribution is. It's been so long since I wrote this, I don't remember.

Al Feiveson

================================

program define pearson
version 7.0
syntax varlist [aweight] [if] [in]
args y x
qui summ `y' [`weight' `exp'] `if' `in' ,det
scalar mu1=r(mean)
scalar mu2=r(Var)
scalar skew=r(skewness)
scalar kurt=r(kurtosis)
scalar sig=r(sd)
local s "scalar"
`s' mu3=`s'(skew)*`s'(sig)^3
`s' mu4=`s'(kurt)*`s'(mu2)^2
`s' be1=`s'(skew)^2
`s' be2=`s'(kurt)
local be1 "scalar(be1)"
local be2 "scalar(be2)"
local mu2 "scalar(mu2)"
local sig "scalar(sig)"
local skew "scalar(skew)"
/* calculate Pearson diff-eq parameters for mean=0 */
tempvar xp
gen `xp'=`x'-scalar(mu1)
scalar Ap=10*`be2'-18-12*`be1'
scalar b0 = -(4*`be2'-3*`be1')*`mu2'/scalar(Ap)
scalar b1=-(`be2'+3)*`sig'*`skew'/scalar(Ap)
scalar b2=-(2*`be2'-3*`be1'-6)/scalar(Ap)
scalar a=scalar(b1)
local a "scalar(a)"
local b0 "scalar(b0)"
local b1 "scalar(b1)"
local b2 "scalar(b2)"
scalar B0=`b0'+`a'*`b1'+`a'*`a'*`b2'
scalar B1=`b1'+2*`a'*`b2'
scalar B2=`b2'
scalar D=B1*B1-4*B0*B2
di "D = ",`s'(D)
if D < 0 { /* complex roots - Type IV */
scalar del=sqrt(-D/(4*B2*B2))
scalar ga=B1/(2*B2)
tempvar X
gen `X'=`xp'-`s'(`a')+`s'(ga)
cap gen zx1=.
replace zx1=log(`X'*`X'+`s'(del)*`s'(del))/(2*B2)
cap gen zx2=.
replace zx2=`s'(ga)*atan(`X'/`s'(del))/(B2*`s'(del))
cap gen f0=.
replace f0=exp(zx1-zx2)
integ f0 `xp'
replace f0=f0/r(integral)
`s' top = `be1'*(`be2'+3)^2
`s' bot = 4*(2*`be2'-3*`be1'-6)*(4*`be2'-3*`be1')
`s' kap = `s'(top)/`s'(bot)
di "kappa = ",`s'(kap)

-----Original Message-----
From: owner-statalist@hsphsun2.harvard.edu [mailto:owner-statalist@hsphsun2.harvard.edu] On Behalf Of Nick Cox
Sent: Friday, November 03, 2006 6:09 AM
To: statalist@hsphsun2.harvard.edu
Subject: st: RE: Creating a distribution from moments

This is called the method of moments and Karl Pearson thought, more or less, that it was the best way to fit a distribution, so long as you know what kind of distribution you are fitting. But it has never really recovered from criticisms made by Ronald A. Fisher and others several decades ago, except that we still use it routinely in problems like plugging the mean and the standard deviation into a Gaussian (so-called normal) formula. (That is usually very close to the maximum likelihood solution, but most summary programs, like -summarize-, use n - 1 not n as divisor for the variance estimator.)

Yes, you can have a go at this, but you need to look at books like the multivolume reference by N.L. Johnson, S. Kotz and friends to see the ways to do it. Usually there is some indirectness: for example, I can't recall any named distribution for which one of the parameters _is_ the skewness or the kurtosis: rather if you have k parameters, you typically end up with k simultaneous equations in those parameters and the first k moments will be useful in solving those. (Strictly, there is a terminology problem here: the skewness and kurtosis, at least as named in Stata, are ratios derived from moments, not moments themselves, but we know what you mean.)

More important, this is only the method of choice if these summaries are _all_ you have to go on. If you have the data, use all the data.

Reza C Daniels

> I have summary statistics for four moments of the density of y:
> mean=877; std.
> deviation=611; skewness=0.658; kurtosis=2.278. Is it
> possible for me to use this information to generate a hypothetical
> density whose four moments approximate these values?
> The analogy here would be creating, for example, a Gaussian
> distribution by:
> set obs 1000
> gen z1=invnorm(uniform())
> My question relates to when one does not want a standard density but
> possibly some parameterization of a Beta or Gamma that fits the four
> moments.
> I have searched through the probability distributions and density
> functions in Stata (I'm using version 8.2SE), but it is not
> immediately obvious to me that I can do this.

* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
How to play Kakuro, learn from the Kakuro masters!

Kakuro puzzles resemble crosswords which use numbers instead of words. The aim of the game is to fill all the blank squares in the grid with only the numbers 1-9 so that the numbers you enter add up to the corresponding clues. When the grid is filled, the puzzle is complete.

Kakuro puzzle grids can be any size, though usually the squares within them have to be arranged symmetrically. As a rule of thumb, the more blank squares a puzzle contains, the harder it is; however this isn't always true, especially if it is a good quality puzzle.

NOTE: It is very important to note that a proper Kakuro puzzle has only 1 unique solution, and it will always have a logical way of reaching it; there should be no guesswork needed. Some websites advertise Kakuro puzzles which are broken and inferior. Don't accept second best: Kakuro.com will always have quality puzzles.

Clue Squares

Kakuro puzzles will contain many clue squares; these are squares which help you to solve the puzzle. A clue square can have an "across" clue or a "down" clue, or both. In the example below we see an "across" clue square, with 4 blank squares to the right of it. The 4 blank squares make up a "run"; you must fill the run so that all the numbers in it add up to the clue (in this case 13). So you could enter 1, 2, 3 and 7.

An "across" clue, 13 over 4 squares

The same is true for "down" clues, however the squares which form the run are positioned below the clue square in that case.

Duplicate numbers

You may not enter any duplicate number in the same run, so the example on the left is incorrect as it has two 1's. Therefore the combination 1+1+7 cannot be used here to add up to the clue of 9. So you must fill in every run, using only the numbers 1-9 without duplicates, so that they add up to the clue square given.

Doesn't Kakuro involve a lot of maths?
Initially, Kakuro may seem like it's a maths puzzle, but really it's more of a logic puzzle (with a bit of maths). You can always logically determine exactly which numbers to place in the squares with no guesswork; the key lies in knowing the number combinations which make up the clues.

For example, a clue number of 10 over a run of 3 squares has the following combinations:

1 + 2 + 7
1 + 3 + 6
1 + 4 + 5
2 + 3 + 5

However, a clue of 7 over 3 squares has only 1 combination:

1 + 2 + 4

(you can't have 1 + 3 + 3, as this would duplicate the number 3)

The fewer combinations a clue has, the easier it is to solve, so you should look for clues you know only have 1 or 2 combinations first. Click here for a printable list of all clues/runs which have only 1 combination.

The more you play Kakuro, the more you learn the number combinations, and the less maths you need to use.

The Kakuro.com combination helper tool

A small program which shows you all the combinations for any given clue over any number of squares. This tool is completely free to all our visitors; you can download and use it while solving any Kakuro puzzle.

Click here to download the free Kakuro combination tool (250k)

Note: If you play puzzles using Kakuro Master, it will automatically show you the correct combinations whenever you place your mouse over a clue square.

Where to go from here?

Now that you have learnt the basics of Kakuro, try the following:
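The combination lists above can be generated in a few lines of code. This is my own sketch, not the downloadable helper tool, but it answers the same question:

```python
from itertools import combinations

def kakuro_combos(clue, squares):
    """All sets of distinct digits 1-9 of the given size summing to the clue."""
    return [c for c in combinations(range(1, 10), squares) if sum(c) == clue]

print(kakuro_combos(10, 3))  # four combinations for 10 over 3 squares
print(kakuro_combos(7, 3))   # [(1, 2, 4)] -- the only one
```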
Area and Perimeter

Here we will learn how to find the area and perimeter of plane figures. The perimeter is used to measure the boundaries and the area is used to measure the regions enclosed.

The length of the boundary of a closed figure is called the perimeter of the plane figure. The units of perimeter are the same as those of length, i.e., m, cm, mm, etc.

A part of the plane enclosed by a simple closed figure is called a plane region, and the measurement of the plane region enclosed is called its area. Area is measured in square units (for example, 1 cm² = 100 mm² and 1 m² = 10,000 cm²).

The formulas for the area and perimeter of different geometrical shapes, with examples, are discussed below:

Perimeter and Area of a Rectangle:

● Perimeter of rectangle = 2(l + b)
● Area of rectangle = l × b; (l and b are the length and breadth of the rectangle)
● Diagonal of rectangle = √(l^2 + b^2)

Perimeter and Area of a Square:

● Perimeter of square = 4 × S
● Area of square = S × S
● Diagonal of square = S√2; (S is the side of the square)

Perimeter and Area of a Triangle:

● Perimeter of triangle = (a + b + c); (a, b, c are the 3 sides of the triangle)
● Area of triangle = √(s(s - a)(s - b)(s - c)); (s is the semi-perimeter of the triangle, s = 1/2 (a + b + c))
● Area of triangle = 1/2 × b × h; (b is the base, h is the height)
● Area of an equilateral triangle = (a^2√3)/4; (a is the side of the triangle)

Perimeter and Area of a Parallelogram:

● Perimeter of parallelogram = 2 (sum of adjacent sides)
● Area of parallelogram = base × height

Perimeter and Area of a Rhombus:

● Area of rhombus = base × height
● Area of rhombus = 1/2 × length of one diagonal × length of the other diagonal
● Perimeter of rhombus = 4 × side

Perimeter and Area of a Trapezium:

● Area of trapezium = 1/2 (sum of parallel sides) × (perpendicular distance between them) = 1/2 (p1 + p2) × h; (p1, p2 are the 2 parallel sides)

Circumference and Area of a Circle:

● Circumference of circle = 2πr = πd, where π = 3.14 or π = 22/7, r is the radius of the circle and d is the diameter of the circle
● Area of circle = πr^2
● Area of ring = Area of outer circle - Area of inner circle
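As a worked example of the triangle formula with the semi-perimeter (Heron's formula), a short Python check on a 3-4-5 right triangle:

```python
import math

def heron(a, b, c):
    """Area of a triangle from its three sides, via the semi-perimeter."""
    s = (a + b + c) / 2            # semi-perimeter s = 1/2 (a + b + c)
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

print(heron(3, 4, 5))  # right triangle: area = 1/2 * 3 * 4 = 6.0
```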
July 7th 2006, 01:02 AM #1

Junior Member

Jun 2006

P(2ap,ap^2) is a point on the parabola x^2=4ay. The normal at P cuts the x axis at S and the y axis at T.

S = (ap(2+p^2), 0)
T = (0, a(2+p^2))

Find the value(s) of p such that P is the midpoint of ST.

Thanks
Nath

> P(2ap,ap^2) is a point on the parabola x^2=4ay. The normal at P cuts the x axis at S and the y axis at T.
> S = (ap(2+p^2), 0)
> T = (0, a(2+p^2))
> Find the value(s) of p such that P is the midpoint of ST.
> Thanks
> Nath

Is it $p^2 = 2?$ In other words, $p = \sqrt{2}$ or $p=-\sqrt{2}$

Keep Smiling

Hello, Nath!

They already did the groundwork for us . . . the rest is easy. [I got the same answers, Malay!]

$P(2ap,ap^2)$ is a point on the parabola $x^2 = 4ay$. The normal at $P$ cuts the x-axis at $S$ and the y-axis at $T.$

$S \,= \,\left(ap[2+p^2],\,0\right)\qquad T\,=\,\left(0,\,a[2+p^2]\right)$

Find the value(s) of $p$ such that $P$ is the midpoint of $ST.$

I assume you know the Midpoint Formula . . .

Given two points $A(x_1,y_1)$ and $B(x_2,y_2)$, the midpoint of $\overline{AB}$ is: . $\left(\frac{x_1+x_2}{2}\,,\,\frac{y_1+y_2}{2}\right)$

We have: . $S\left(ap[2+p^2],\,0\right),\;\;T\left(0,\,a[2+p^2]\right)$

The midpoint is: . $M \:=\:\left(\frac{ap[2+p^2] + 0}{2}\,,\,\frac{0+a[2+p^2]}{2}\right) \:=\:\left(\frac{ap[2 + p^2]}{2}\,,\,\frac{a[2+p^2]}{2}\right)$

Since we want $M = P$, we have: . $\left(\frac{ap[2+p^2]}{2}\,,\,\frac{a[2+p^2]}{2}\right)\;=\;\left(2ap,\,ap^2\right)$

If two points are equal, their corresponding coordinates are equal.

Equate $x$'s: . $\frac{ap(2+p^2)}{2} = 2ap\quad\Rightarrow\quad ap(2 + p^2) = 4ap$

. . Divide by $ap:\;\;2+p^2 \,=\,4\quad\Rightarrow\quad p^2 = 2\quad\Rightarrow\quad p = \pm\sqrt{2}$

Equate $y$'s: . $\frac{a(2+p^2)}{2} = ap^2\quad\Rightarrow\quad a(2+p^2) = 2ap^2$

. . Divide by $a:\;\;2 + p^2 \,=\,2p^2\quad\Rightarrow\quad p^2 = 2\quad\Rightarrow\quad p = \pm\sqrt{2}$

Therefore: .
$\boxed{p \:=\:\pm\sqrt{2}}$

Thanks Guys. Just quickly: is a possible answer p = 0?

> Is a possible answer p = 0

No. It does not satisfy the conditions.

Keep Smiling

July 7th 2006, 05:43 AM #2
July 7th 2006, 08:17 AM #3 Super Member May 2006 Lexington, MA (USA)
July 7th 2006, 03:56 PM #4 Junior Member Jun 2006
July 10th 2006, 03:39 AM #5 Junior Member Jun 2006
July 10th 2006, 03:43 AM #6
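A quick numerical confirmation of the thread's answer (my own check, for an arbitrary choice of the parameter a):

```python
import math

a = 1.7  # any positive value of the parameter a in x^2 = 4ay
for p in (math.sqrt(2), -math.sqrt(2)):
    P = (2 * a * p, a * p * p)
    S = (a * p * (2 + p * p), 0.0)
    T = (0.0, a * (2 + p * p))
    M = ((S[0] + T[0]) / 2, (S[1] + T[1]) / 2)  # midpoint of ST
    assert abs(M[0] - P[0]) < 1e-9 and abs(M[1] - P[1]) < 1e-9
print("p = ±√2 makes P the midpoint of ST")
```

For p = 0, P = (0, 0) while the midpoint of ST is (a, a), so p = 0 indeed fails.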
Bergenfield Math Tutor

...I have a bachelor's degree in physics. I have tutored high school geometry both privately and for the Princeton Review.

20 Subjects: including ACT Math, algebra 1, algebra 2, SAT math

...I have also coordinated and taught SAT Math classes. I have experience working with students ranging from 4th-12th grades, including students diagnosed with ADHD. Simply put, my style of teaching is personalized so that I can help each individual student reach his/her personal goals in the coursework.

24 Subjects: including algebra 1, ACT Math, geometry, SAT math

...I received my Bachelors in Math from Montclair State University. Now I am getting my Masters in Education from Montclair State University. I have tutored students before in Mathematics, and success in math is all about practice.

12 Subjects: including calculus, vocabulary, statistics, SAT math

...My education includes a bachelor's in mathematics and physics, as well as a master's in pure and applied mathematics. I am currently a college professor with a full time job at a community college. My style of teaching is to treat the student as an equal.

7 Subjects: including discrete math, algebra 1, algebra 2, calculus

...One of my electives was tutoring for the Advancement Via Individual Determination (AVID) program that my high school offered. I am currently tutoring students who go to some of the top prep schools in NYC, including Trinity and Collegiate. I enjoy working with students and helping them reach their academic potential.

16 Subjects: including SAT math, algebra 1, algebra 2, biology
September 2008

The symbols 1, 2, 3, 4, 5, 6, 7, 8, 9, 0 are so commonplace that we rarely appreciate just how special our system of numerals really is. Fifteen hundred years of development have given us an extremely succinct method for writing down even very large numbers. Our numerals have their origin in a system developed by the Hindu scholars of India in the middle of the first millennium AD. Their system was in turn adopted by the Arabs, who ultimately transmitted it to Europe in the twelfth century. For this reason, these numerals tend to be referred to as Hindu-Arabic numerals.

The key to the success of this system is its positional nature. We have a total of ten symbols at our disposal, but we are certainly not limited to writing down ten different values. One of the first things we all learn at school is that our numbers are arranged in columns. Reading from right to left, we first have the units column, then the tens, the hundreds, the thousands and so on. It not only matters which symbols we write down, but also where we place them in this arrangement. This is what we mean by a positional number system. But why should a positional system arise in the first place?
One solution to this problem is to introduce more new symbols. This is precisely how number systems developed historically. As an example of a system with various different symbols, consider the ancient Babylonian system. The first four symbols are give in figure 1: Figure 1: Babylonian numerals This system was first developed by the Sumerians around 3500 BC, but it is usually associated with the Babylonians. One hundred is now represented by a single symbol, a vast improvement on our initial tally system. Even large numbers, whose tally form would be horrendous, can now be written down more succinctly. For example, 3,964 is written as follows: Figure 2: The number 3,964 written in Babylonian numerals. Notice that even more space is saved by arranging these symbols in columns rather than in a row: One important thing to notice about these numerals is that the symbol for one always means one, the symbol for a hundred always means a hundred, etc. It doesn't actually matter in which order we write the symbols. The number also represents the number 3,964; the order of the symbols is irrelevant. The Babylonians tended to use a sensible arrangement: they grouped like symbols together and arranged the groups in ascending order, reading from right to left. In contrast to the Hindu-Arabic numerals, this Babylonian system is non-positional. Figures 3 and 4 give further examples of non-positional number systems. Figure 3: Egyptian numerals, ca 3000 BC. Figure 4: Minoan (Linear B) numerals, ca 1500 BC. The number systems given in figures 3 and 4 seem a little easier on the eye than the Babylonian system. The reason for this is perhaps the fact that they use only one sign for each of 100 and 1000, unlike the compound Babylonian symbols. Despite this, we continue to concentrate on the Babylonian system, because the Babylonians went on to introduce an innovation which never occurred to the Egyptians or the Minoans (as far as we know...!). 
Get into position

Clocks remind us of the Roman non-positional number system and the Babylonian sexagesimal system.

The Babylonians presumably recognised the shortcomings of their non-positional number system. Using the symbols given in figure 1, they could write down any number up to and including 9,999, but no higher number. If they wanted to write down increasingly large numbers, they would be forced to introduce more and more new symbols. This would have presented them with two problems: increasingly unwieldy notation and the need to remember a large number of symbols. The Babylonians needed a different approach to writing down large numbers. And the new approach was, of course, a positional one.

The first step was to scrap all but the symbols for one and ten. Then, like us, they arranged combinations of these symbols in columns, reading from right to left. However, unlike ours, the Babylonian number system was not decimal, but sexagesimal: successive columns represented the powers of 60, rather than 10. We have inherited this system both for time-keeping and angle-measuring: e.g. 60 seconds in a minute. In the following example we will draw borders around the columns for clarity, although the Babylonians themselves did not do this.

The lowest-value column is the right-most column, which could contain any number up to and including 59; this is the sexagesimal version of a units column. Any number less than or equal to 59, such as 42, appears in this system just as it did in the earlier non-positional system. To go beyond 59, we must add another column to the left, which represents multiples of 60. For example, a numeral with 1 in the sixties column represents (1 × 60) + (0 × 1) = 60 (the symbol in the right-hand column indicates an empty column; more on this shortly). Similarly, the string of symbols with 11 in the sixties column and 24 in the units column represents (11 × 60) + (24 × 1) = 684. With one column our limit is 59; with two we can write down numbers as high as (59 × 60) + (59 × 1) = 3,599.
To go higher, we only need to add further columns to the left, representing multiples of higher powers of 60. If we add a third column, then the value entered there represents a multiple of 3,600 = 60 × 60, and a fourth column would represent multiples of 216,000 = 60^3. With a three-column arrangement we can write down numbers as high as (59 × 3,600) + (59 × 60) + (59 × 1) = 215,999. In this new scheme our favourite number 3,964 becomes a three-column numeral with entries 1, 6 and 4, because 3,964 = (1 × 3,600) + (6 × 60) + (4 × 1). It is easy to see that we can extend this system indefinitely simply by adding more columns on the left. There is no longer a need for new symbols.

A little something should be said about the "-" used to indicate an empty column in the representation of 60. First of all, the Babylonians had no such symbol. They either left an empty space to indicate an empty column, or they did not mark the column at all, creating great potential for ambiguity in their numerals. For example, a numeral with entries 21 and 12 could denote (21 × 60) + (12 × 1) = 1,272, or (21 × 3,600) + (12 × 1) = 75,612, or (21 × 3,600) + (12 × 60) = 76,320, etc. However, it seems that the Babylonians never had a problem here, since the context always made clear which number was meant. It was not until the Persians inherited the Babylonian number system around 400 BC that a sign was introduced to mark an empty column. With this sign, a number such as 216,001, whose middle columns are empty, could be written unambiguously.

The final verdict

Compared to non-positional systems, positional systems are relatively rare in history. However, the benefits of adopting a positional system are clear: they are more concise and easier to use. Our own system takes this conciseness to extremes: rather than having a combination of digits making up the value in each position, we have precisely one symbol per position. It is easy to see why the Babylonians didn't adopt this way of doing things themselves: they would need 60 symbols in total, so their system would be rather hard on the memory.
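The positional scheme just described is ordinary base-60 place value, so the column entries of any number can be recovered mechanically. As a quick illustration (not part of the original article; the function name is mine):

```python
def to_sexagesimal(n):
    """Return the base-60 column entries of a non-negative integer,
    most significant column first (e.g. 3964 -> [1, 6, 4])."""
    digits = []
    while n > 0:
        digits.append(n % 60)   # value of the current (right-most) column
        n //= 60                # shift to the next column on the left
    return digits[::-1] or [0]
```

For example, to_sexagesimal(3964) returns [1, 6, 4], matching (1 × 3,600) + (6 × 60) + (4 × 1), and to_sexagesimal(684) returns [11, 24].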
Our "single symbol per position" means that our numerals are as concise as they can possibly be. Just compare our way of writing 9,876,543,210 with its much longer Babylonian counterpart.

The Hindu-Arabic numerals have their own downside, however. All our numerals 1, 2, 3, 4, 5, 6, 7, 8, 9 are abstract symbols that have little or no connection with the quantities they represent, other than the connection we impose upon them. The Babylonian way of writing "two" is more visibly connected with the quantity two than the symbol 2. In order to do arithmetic in our numerals, we must be familiar with their rules of operation. We must know that when we add the symbol 1 to the symbol 2, we get the symbol 3, and so on. We need to know our times tables. We need to know that when we add 1 to 9 in a particular column, then we must enter 0 in that column and add 1 to the next column to the left. When listed like this, these rules begin to sound complicated, which indeed they would be to an external observer who has no prior knowledge of our numeral system. We all have a certain facility with these numerals because it is drummed into us from an early age.

Arithmetic in Babylonian numerals, on the other hand, is much more mechanical. We can still take the "column-by-column" approach, but we now need only to gather together like symbols and then apply the following two rules:

1) Replace every ten occurrences of the unit symbol by one ten symbol.

2) If six ten symbols occur in a particular column, then delete these and enter a unit symbol into the next column on the left.

Hindu-Arabic numerals: much loved and indispensable.

After singing the praises of fabulous positional systems, I do not want to diminish the value of our own numerals in any way. The fact is that we find them easy to use. Whether this is because we are instructed in their use from an early age, or because they are inherently easy to use, is immaterial, at least from a pragmatic point of view.
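The two carrying rules above amount to column-by-column addition in base 60. A hedged sketch (mine, not the article's), with each number held as a list of column values; rule 1 (ten unit wedges become a ten wedge) is implicit because we store each column as a single count, while rule 2 becomes the carry at sixty:

```python
def add_babylonian(a, b):
    """Add two numbers given as lists of base-60 column values
    (most significant first), carrying whenever a column reaches 60,
    i.e. whenever six ten-wedges would accumulate."""
    a, b = a[::-1], b[::-1]          # work from the units column leftward
    result, carry = [], 0
    for i in range(max(len(a), len(b))):
        total = (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) + carry
        carry, column = divmod(total, 60)   # rule 2: sixty in a column carries left
        result.append(column)
    if carry:
        result.append(carry)
    return result[::-1]
```

So add_babylonian([11, 24], [59]) gives [12, 23], i.e. 684 + 59 = 743 = (12 × 60) + 23.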
In truth, these numerals are so ubiquitous (more so than the Babylonian numerals ever were) that we would find it extremely difficult to change them, even if we wanted to. It looks like they're here to stay! So next time you're checking your bank balance or dialling a phone number, spare a thought for the centuries of development that have gone into producing these invaluable signs.

About the author

Christopher Hollings did his degree and PhD in mathematics at the University of York. He is now a post-doctoral researcher at the Centre of Algebra at the University of Lisbon. His main areas of interest are semigroup theory and the history of mathematics. He loves to write, both for professional reasons and for recreation. Other hobbies include reading and photography, as well as trying very hard to speak Portuguese.
RE: st: Identifying the best scale without a "gold standard"

From: Cameron McIntosh <cnm100@hotmail.com>
To: STATA LIST <statalist@hsphsun2.harvard.edu>
Subject: RE: st: Identifying the best scale without a "gold standard"
Date: Fri, 11 Nov 2011 08:03:48 -0500

You should be using a second-order or bifactor model for this analysis:

Koufteros, X., Babbarb, S., & Kaighobadi, M. (2009). A paradigm for examining second-order factor models employing structural equation modeling. International Journal of Production Economics, 120(2), 633-652.

Rindskopf, D., & Rose, T. (1988). Some theory and applications of confirmatory second-order factor analysis. Multivariate Behavioral Research, 23(1), 51-67.

Chen, F.F., West, S.G., & Sousa, K.H. (2006). A Comparison of Bifactor and Second-Order Models of Quality of Life. Multivariate Behavioral Research, 41(2), 189-225. http://www.iapsych.com/articles/chen2006.pdf

Chen, F.F., Hayes, A., Carver, C.S., Laurenceau, J.-P., & Zhang, Z. (2011). Modeling General and Specific Variance in Multifaceted Constructs: A Comparison of the Bifactor Model to Other Approaches. Journal of Personality, Accepted.

Then the most promising scale might be the one with the highest first-order factor loading on the second-order factor (in the second-order analysis), or the one with the least amount of subscale-specific explained variance (bifactor model). But that's mainstream stuff... I would also be curious about what automated search routines might tell you about the most tenable structures for the observed variables.
Although it may occasionally be plausible, often the common factor model is something we force on multi-item scales without ever considering alternative generating structures:

Landsheer, J.A. (2010). The specification of causal models with Tetrad IV: a review. Structural Equation Modeling, 17(4), 703-711. http://www.phil.cmu.edu/projects/tetrad/

Zheng, Z.E., & Pavlou, P.A. (2010). Toward a Causal Interpretation from Observational Data: A New Bayesian Networks Method for Structural Models with Latent Variables. Information Systems Research, 21(2), 365-391. http://www.utdallas.edu/~ericz/ISR09.pdf

Xu, L. (2010). Machine learning problems from optimization perspective. Journal of Global Optimization, 47, 369-401. http://www.cse.cuhk.edu.hk/~lxu/papers/journal/ml-opt10.pdf

Tu, S., & Xu, L. (2011a). Parameterizations make different model selections: Empirical findings from factor analysis. Frontiers of Electrical and Electronic Engineering in China, 6(2), 256-274. http://www.cse.cuhk.edu.hk/~lxu/papers/journal/11FEE-tsk-two.pdf

Tu, S., & Xu, L. (2011b). An investigation of several typical model selection criteria for detecting the number of signals. Frontiers of Electrical and Electronic Engineering in China, 6(2), 245-255. http://www.cse.cuhk.edu.hk/~lxu/papers/journal/11FEE-tsk-sev.pdf

Xu, L. (2011). Codimensional matrix pairing perspective of BYY harmony learning: hierarchy of bilinear systems, joint decomposition of data-covariance, and applications of network biology.
Frontiers of Electrical and Electronic Engineering in China, 6(1), 86-119. http://www.cse.cuhk.edu.hk/~lxu/papers/journal/byy11.pdf

Cam

> From: paul.seed@kcl.ac.uk
> To: statalist@hsphsun2.harvard.edu
> Date: Fri, 11 Nov 2011 11:49:57 +0000
> Subject: st: Identifying the best scale without a "gold standard"
>
> Dear Statalist,
>
> I have six scales, all of which are supposed to measure the same thing (breathlessness).
> Factor analysis confirms a single large factor, with different weightings by the different scales.
> I can now say in general terms which scales are best & which worst, but I
> would like to confirm that the observed differences are not due to chance.
>
> Method 1: extract the main factor & use Richard Goldstein's -corcor- to compare
> correlations between the scales & the factor.
> Method 2: take the simple average of the scales & use -corcor- as before.
>
> This gives me very different answers:
>
> ******** example code ****************
> version 11.2
> webuse bg2
> factor bg2cost1-bg2cost6
> * Method 1
> predict f1
> foreach v of varlist bg2cost1-bg2cost5 {
>     corcor f1 `v' bg2cost6
> }
> * Method 2
> gen mean_bgcost = (bg2cost1 + bg2cost2 + bg2cost3 + bg2cost4 + bg2cost5 + bg2cost6)/6
> foreach v of varlist bg2cost1-bg2cost5 {
>     corcor mean_bgcost `v' bg2cost6
> }
> ******** end example ****************
>
> I am fairly sure that method 2 is better, as
> in method 1 there is a circularity about
> using the weighted average and then showing that
> the variable with the biggest weighting also has
> the biggest correlation.
> However, is there also a flaw in method 2
> (apart from the multiple testing issues)?
> Is there a better approach?
> Any thoughts, references, programs appreciated.
>
> Paul T Seed, Senior Lecturer in Medical Statistics,
> Division of Women's Health, King's College London
> Women's Health Academic Centre KHP
> 020 7188 3642.
>
> "I see no reason to address the comments of your anonymous expert ...
> I prefer to publish the paper elsewhere" - Albert Einstein
>
> *
> * For searches and help try:
> * http://www.stata.com/help.cgi?search
> * http://www.stata.com/support/statalist/faq
> * http://www.ats.ucla.edu/stat/stata/
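For readers without Stata: the comparison that -corcor- performs here, two dependent correlations that share a common variable (the factor score or scale mean), can be sketched with the Meng, Rosenthal and Rubin (1992) z-test. This is an illustrative stand-in written by me, not the -corcor- implementation itself:

```python
import math

def compare_dependent_correlations(r12, r13, r23, n):
    """Meng, Rosenthal & Rubin (1992) z-test of H0: rho12 = rho13,
    for two correlations sharing variable 1, where r23 is the correlation
    between the two competing variables and n is the sample size.
    Returns (z, two-sided p-value)."""
    z12, z13 = math.atanh(r12), math.atanh(r13)       # Fisher transforms
    rbar2 = (r12 ** 2 + r13 ** 2) / 2.0
    f = min((1.0 - r23) / (2.0 * (1.0 - rbar2)), 1.0)  # f is capped at 1
    h = (1.0 - f * rbar2) / (1.0 - rbar2)
    z = (z12 - z13) * math.sqrt((n - 3) / (2.0 * (1.0 - r23) * h))
    p = math.erfc(abs(z) / math.sqrt(2.0))            # two-sided normal p-value
    return z, p
```

With equal correlations the test returns z = 0, as it should; the larger the gap between the two correlations with the criterion, the larger |z|.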
After AP Physics C, you take Modern Physics, right? Can anyone recommend me a good college textbook for Modern Physics? I need the book's name, author, edition.
Reengineering Aircraft Structural Life Prediction Using a Digital Twin

International Journal of Aerospace Engineering, Volume 2011 (2011), Article ID 154798, 14 pages

Research Article

^1Structural Sciences Center, Air Vehicles Directorate, U.S. Air Force Research Laboratory, Wright-Patterson Air Force Base, Dayton, OH 45433, USA
^2School of Civil and Environmental Engineering, Cornell University, Ithaca, NY 14853, USA

Received 25 April 2011; Accepted 2 August 2011

Academic Editor: Nicholas Bellinger

Copyright © 2011 Eric J. Tuegel et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Reengineering of the aircraft structural life prediction process to fully exploit advances in very high performance digital computing is proposed. The proposed process utilizes an ultrahigh fidelity model of individual aircraft by tail number, a Digital Twin, to integrate computation of structural deflections and temperatures in response to flight conditions with resulting local damage and material state evolution. A conceptual model of how the Digital Twin can be used for predicting the life of aircraft structure and assuring its structural integrity is presented. The technical challenges to developing and deploying a Digital Twin are discussed in detail.

1. Introduction

Despite increasing capability to understand relevant physical phenomena and to automate numerical modeling of them, the process for lifing aircraft structure as outlined in Figure 1 has not advanced greatly in fifty years. The external loads on an aircraft (aerodynamic pressures and ground loads) are developed by the loads group using a specialized model and placed in a database.
The loads for selected design points are pulled from the database by the structural modeling group who then apply them to the structural finite element model (FEM) to develop the internal loads in the airframe for each design load case. These cases are placed in a second database. The durability and damage tolerance experts use these internal load cases to develop stress transfer functions relating the external loads to local stresses at details such as fastener holes, cutouts, and fillets. The stress transfer functions are applied to the loads in the flight loads database to develop a stress spectrum at each point of interest in the airframe. These stress spectra are used in specialized fatigue software together with an idealized local geometry to predict the fatigue crack nucleation or fatigue crack growth lives of details that have been identified as fatigue sensitive. Meanwhile, the dynamics group uses yet another specialized model to determine the vibration characteristics of the aircraft to address the fatigue of the structure due to low-amplitude, high-frequency dynamic loads such as acoustic and aeroelastic. Increased computational horsepower has enabled each of the individual parts of this process to be performed more efficiently, and so more load cases and fatigue locations can be analyzed. The output files from one model are more readily translated into input files for the next step of the process. However, there has been little effort made to integrate the physics in these individual models into a single comprehensive representation of the aircraft. Nor has the fidelity in the models of the physics increased significantly. The structural FEM is still a linear elastic model. The fatigue life prediction models are essentially the same ones that were used when the calculations were performed on hand-held calculators. 
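The traditional endpoint of this pipeline, a local stress spectrum fed into a fatigue calculation, really is simple enough to fit in a few lines. As a sketch of that classic calculation (an S-N curve plus Miner's linear damage rule; all material constants and the spectrum are illustrative values of my own, not from the paper):

```python
def miner_damage(stress_ranges, sn_C=1e12, sn_m=3.0):
    """S-N curve N_f = C / S^m plus Miner's linear rule: each cycle of
    stress range S consumes 1/N_f of the life; failure when the sum hits 1."""
    return sum(s ** sn_m / sn_C for s in stress_ranges)

# one "flight" of local stress ranges (MPa), e.g. from a stress transfer
# function applied to the flight loads database
spectrum = [120.0, 80.0, 80.0, 200.0, 60.0]
damage_per_flight = miner_damage(spectrum)
flights_to_failure = 1.0 / damage_per_flight
```

The point of the sketch is how little physics it contains: a power-law curve fit and a linear damage sum, which is essentially the hand-calculator-era model the authors describe.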
As a result, the identification of fatigue sensitive or other damage prone locations relies primarily on engineering judgment and fatigue tests. Because of judgment born of experience, evolution of existing aircraft designs is fairly successful; however, the development of revolutionary aircraft is fraught with unexpected problems that result in weight growth, schedule delays, and cost overruns. The life prediction process depicted in Figure 1 was the best that could be done in the years before digital computing was commonplace. The process was used to design the airframe to meet the design service life requirement and to establish the intervals at which to inspect locations on the airframe for fatigue cracks. The coming of the computer age led to the automation of various parts of this traditional life prediction process. However, the entire aircraft life prediction process has not been reexamined to see how it can be reengineered as a result of the availability of high performance computing.

This paper presents a proposal for reengineering the structural life prediction process, hereafter called the Digital Twin, that utilizes the growing very high performance computational infrastructure. The Digital Twin concept developed from discussions the primary author had with T. A. Cruse, Professor Emeritus at Vanderbilt University and former Chief Scientist of the Air Force Research Laboratory, and A. R. Ingraffea, Professor at Cornell University. The paper outlines the details of the Digital Twin concept and then discusses the technical challenges that today face the realization of the Digital Twin process.

2. The Digital Twin Concept

Consider the following hypothetical capability. In 2025, the United States Air Force (USAF) takes delivery of the first of a new model of aircraft, tail number 25-0001. Along with the physical aircraft, the USAF also receives an as-built digital model of this particular aircraft, designated 25-0001D/I.
25-0001D/I is a 1000 billion degree-of-freedom (DOF) hierarchical, computational structures model of 25-0001. This “Digital Twin” is ultrarealistic in geometric detail, including manufacturing anomalies, and in material detail, including the statistical microstructure level, specific to this aircraft tail number. 25-0001D/I accepts probabilistic input of loads, environmental conditions, and usage factors, and it also tightly couples to an outer-mold-line, as-built, computational fluid dynamics (CFD) model of 25-0001. 25-0001D/I can be virtually flown through a 1 hour flight in 1 hour on an exaflop-scale high-performance computer. During each such virtual flight, 25-0001D/I accumulates usage damage according to the best-physics-based, probabilistic simulations, and outputs about 1 petabyte of material and structural performance and damage data. 25-0001D/I is “flown” for 1000 hours during the design, development, and initial testing of 25-0001. During this accelerated, preliminary “testing”, a number of unexpected failure modes are uncovered that would lead to loss of primary structural elements, with 2 incidents projected to result in the loss of the aircraft. Appropriate repairs, redesigns, and retrofits are planned and implemented on 25-0001 before its first flight to preclude such events from actually occurring. It is recognized, however, that design-point usage is always trumped by actual usage, involving unplanned mission types and payloads. Therefore, a second digital instantiation, 25-0001D/A, is linked to the structural sensing system deployed on 25-0001. This sensor system records, at high frequency, actual, six-DOF accelerations, as well as surface temperature/pressure readings during each actual flight of 25-0001. Each hour of real flight produces about 1 petabyte of such real data. These data are input into the 25-0001D/A structural model, and this model itself becomes a virtual sensor, interpolating sparse acquired data over the entire airframe.
Using Bayesian statistical techniques, 25-0001D/I is periodically updated to reflect the actual usage of 25-0001 recorded in 25-0001D/A. 25-0001D/I is rerun for forecasting the remaining useful life of 25-0001, and for updating reliability estimates for all primary structural components. This prognosis leads to time-and-budget-appropriate execution of maintenance, repair, and replacement plans resulting from such updated useful life and reliability estimates. This process is to be executed for all aircraft of this type in the USAF inventory. The Digital Twin is a reengineering of structural life prediction and management.

Is this science fiction? It is certainly an audacious goal that will require significant scientific and technical developments. But even if only a portion of this vision is realized, the improvements in structural life prediction will be substantial. Now consider the concept of operation for the Digital Twin process in detail. This will make apparent the technical challenges that need to be overcome to enable the Digital Twin process.

3. Operation of the Digital Twin

The operation of the Digital Twin for life prediction would proceed as diagramed in Figure 2. The Digital Twin is represented in the figure as a single entity, but it may consist of several components that are intimately linked, such as a thermal/heat transfer model, a dynamics model, a stress analysis model, and a fatigue model. A mission or series of missions is assigned to a particular aircraft. A reasonable estimate of the flight trajectory and maneuvers that will be flown during the mission is established. The CFD model of that specific aircraft is then “flown” virtually through those flights to estimate the loads and environments the aircraft will experience. As the aircraft is being “flown”, aerodynamic pressures on the aircraft are applied to the structural Digital Twin FEM over the time interval of the flight.
The CFD and FEM models are closely coupled so that the effect of aeroelastic vibrations and structural deflections on the aerodynamic flow, if any, can be captured, and vice versa. Traditionally, any effect between structural deflections and aerodynamic flows has been discounted, but for a realistic high fidelity model, the possible interactions between physical phenomena should not be ignored from the start. The structural FEM models the entire range of physics acting on the structure: thermodynamics, global aeroelastic vibrations, and local deformation, both quasistatic and dynamic. The time history of the thermal and stress fields throughout the aircraft is developed for the entire virtual flight. The Digital Twin has full knowledge of how the aircraft has been flown previously and the condition of all structural components in terms of material state and damage at the start of the virtual flight. With these two sets of information, the damage models embedded in the Digital Twin forecast the evolution of material states and the progression of damage during the virtual flight. Damage is not limited to fatigue cracking, but includes creep, fretting and wear, delamination and microcracking in composite materials, corrosion and oxidation, and panel buckling among others. The anticipated probability distribution of remaining useful life for the aircraft is output at the completion of the virtual flight. Many of the physical phenomena discussed above are nonlinear. As a result, the structural FEM must perform nonlinear analysis. Since the evolution of the material state and the development of damage affect the stiffness of structural components, the coefficient of thermal expansion, and the load at which inelastic deformations begin, among other properties, the material state evolution and damage models must pass information back to the “stiffness matrix” of the FEM so that the thermal and stress fields are accurately determined. 
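The probabilistic remaining-life output described above can be illustrated with a deliberately simple Monte Carlo sketch: a closed-form Paris-law crack-growth life with uncertain initial flaw size and growth-rate coefficient. All distributions and material constants here are illustrative assumptions of mine, not values from the paper, and real Digital Twin damage models would be far richer:

```python
import math
import random

def cycles_to_failure(a0, C, m, delta_sigma, a_crit, Y=1.0):
    """Closed-form Paris-law life (valid for m != 2): integrate
    da/dN = C * (Y * delta_sigma * sqrt(pi * a))^m from a0 to a_crit.
    Stress in MPa, crack sizes in metres, C for delta-K in MPa*sqrt(m)."""
    k = C * (Y * delta_sigma * math.sqrt(math.pi)) ** m
    e = 1.0 - m / 2.0
    return (a_crit ** e - a0 ** e) / (k * e)

def remaining_life_distribution(n_samples=10000, seed=1):
    """Propagate uncertainty in initial flaw size and growth-rate
    coefficient into a sorted sample of remaining lives (in cycles)."""
    rng = random.Random(seed)
    lives = []
    for _ in range(n_samples):
        a0 = rng.lognormvariate(math.log(0.5e-3), 0.3)  # initial flaw [m]
        C = rng.lognormvariate(math.log(1e-11), 0.2)    # Paris coefficient
        lives.append(cycles_to_failure(a0, C, m=3.0, delta_sigma=100.0, a_crit=0.02))
    return sorted(lives)   # low percentiles give conservative life estimates
```

The output is not a single life number but a sample from a life distribution, from which a fleet manager could read off, say, a 1st-percentile (conservative) remaining life.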
In addition, the material state and damage models must communicate with each other so that possible synergistic effects are captured. Of course, there will be uncertainty about how well the virtual flight reflects what will really happen during the actual flight. In addition, there will always be incomplete information about the properties of the materials used, the quality of the fabrication and assembly methods, and so forth. The Digital Twin will translate these uncertainties in inputs into probabilities of obtaining various structural outcomes. The likelihood of the airframe satisfactorily surviving the demands of the mission can then be factored into whether to send that particular aircraft on that particular mission.

After the physical aircraft flies the actual mission, data about the flight will be downloaded from the aircraft tracking and structural health management (SHM) systems. The aircraft tracking system records sufficient flight parameter data to accurately describe the flight within the CFD Digital Twin. Using the flight parameter history for the actual flight, the Digital Twin is “flown” through the actual mission, and the probability distribution of the actual remaining life is computed. In addition to having sensors for damage detection at critical locations, the SHM system also senses and records strain histories at select locations during the flight. The strains in the Digital Twin at the selected locations are compared to the strains recorded on the physical aircraft. The damage state at locations in the Digital Twin corresponding to aircraft locations with SHM sensors is compared to the condition detected by the SHM system. Differences between the Digital Twin and the condition of the aircraft are resolved with a formal mathematical process such as Bayesian updating. In this way, the Digital Twin undergoes continuous improvement and becomes more reliable the longer the aircraft is in service.
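The Bayesian updating step can be illustrated with the simplest conjugate case: treat a scalar strain-prediction bias of the model as Gaussian, and update it with sensor residuals of known noise variance. This is a toy stand-in of my own for the formal process the authors envision, with invented numbers:

```python
def bayes_update_normal(mu_prior, var_prior, measurements, var_noise):
    """Conjugate normal-normal update of a scalar model-bias parameter
    given measurements with known noise variance."""
    n = len(measurements)
    xbar = sum(measurements) / n
    var_post = 1.0 / (1.0 / var_prior + n / var_noise)
    mu_post = var_post * (mu_prior / var_prior + n * xbar / var_noise)
    return mu_post, var_post

# residuals: measured strain minus Digital Twin prediction at one sensor
# (microstrain); a vague prior of zero bias is sharply revised by the data
residuals = [12.0, 9.5, 11.2, 10.4]
mu, var = bayes_update_normal(0.0, 100.0, residuals, 4.0)
```

Each flight's residuals tighten the posterior, which is the mechanism behind the claim that the Digital Twin "becomes more reliable the longer the aircraft is in service."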
The use of the Digital Twin is not limited to decisions about a single flight. The usage for a particular aircraft can be projected forward for any desired length of time. The Digital Twin can be flown virtually through all the flights corresponding to the projected usage to forecast the maintenance needs and repair costs for the aircraft during that time. This can be done with the Digital Twin for every aircraft in the fleet to estimate the sustainment needs of the fleet for that period of time. With updating to reflect repairs and part replacements, the Digital Twin can also be used for configuration control of individual aircraft.

The prediction of the remaining useful life of aircraft structure with the Digital Twin requires modeling of the response of the structure to all of the applied loads: quasistatic maneuvering aerodynamic loads, high-frequency sonic and dynamic loads, and thermal fluxes. The time history of the structure's response to these forcing functions must be simulated in order to properly evolve the damage state of the structure with known levels of uncertainty. The damage state at numerous points in the structure must be tracked and updated with each flight. Achieving these requirements is not possible without significant technical developments in the following areas:

- multiphysics modeling;
- multiscale damage modeling;
- integration of the structural and damage models;
- uncertainty quantification, modeling, and control;
- manipulation and updating of large databases;
- high resolution structural analysis.

The technical developments required in each of these areas are discussed in the following section.

4. Needed Capabilities and the State of the Art

4.1. Multiphysics Modeling

Several commercial structural analysis packages provide the ability to predict the thermodynamic response of structures subjected to transient thermal fields, as well as temporally and spatially varying tractions and boundary conditions.
These commercial codes perform well for thermal-structural problems that can be addressed using one-way analyses, where the thermal stress solution can be computed adequately by performing a thermal analysis followed by a structural analysis using the temperatures derived in the thermal analysis. However, this approach is a simplified model of the actual physics that may not be appropriate for all flight domains. An example of where two-way coupling may be needed is engine exhaust-washed structure. To prevent “line-of-sight” into the elevated temperature regions of embedded engines, some recent aircraft have the engine exhaust flow over a portion of the airframe in order to give the hot exhaust time to diffuse. This portion of the structure is exposed to a severe thermal-acoustic environment. As the panel deforms out of plane, it is more exposed to the engine exhaust, resulting in greater heating. The increased temperature of the skin can increase the out-of-plane deformation further and produce large changes in the vibration response, resulting in more heating of the panel. The deformation of the panel into the exhaust stream can cause separation of the flow from the panel or transition to turbulent flow, which further affects the heating of the skin. The complex interaction between these physical phenomena cannot be simulated in a one-way coupled analysis. Two-way coupled methods may be loosely or tightly coupled. Loosely coupled or “partitioned” approaches use two distinct solvers as in the one-way coupling, but now information is passed between solvers at fixed time increments. A tightly coupled, or “monolithic”, approach solves the thermal-structural equations in a single domain via a unified treatment. In general, a tightly coupled method converges more rapidly than a loosely coupled one due to a consistent system tangent stiffness.
The disadvantage in the tightly coupled approach is the requirement of additional memory to solve a larger system of equations and accompanying possible ill conditioning of the associated matrices [5]. The most severe challenge for two-way coupling of the physics is the different temporal and geometric scales of the different physics, especially for transient events. One strategy for solving multiscaled, multiphysics problems is to partition the solution domain along disciplines. Separate techniques can be used for each physical domain at each time interval, while loads, tractions, and other information are exchanged through an interface. Each physical phenomenon has a solver tuned to its particular time and spatial scaling requirements. The key issues with this approach are maintaining coupling and guaranteeing stability, accuracy, and convergence through the interface. The interface needs to apply feedback forces and tractions at the appropriate time and spatial scale without the loss or creation of energy. The interface also has to be constructed so that one physical phenomenon is coupled to another without regard to the peculiarities of the individual solvers. An example of such an interface method is the common refinement scheme proposed by Jaiman et al. [5]. This method is particularly suited for the mismatched meshes that occur naturally with disparate physical domains. If the interfacial data are not parceled properly with the case of mismatched meshes, accuracy errors can occur. The common refinement scheme integrates over the subintervals to eliminate this error. The scheme allows for the possibility of exchanging the solver of either field. For instance, the structural solver could be changed from a full finite element model to a reduced-order model, without influencing the implementation of the acoustic solver. An example of another solution technique is illustrated in Figure 3 for coupled aerothermoelasticity. 
This particular problem is partitioned into aerothermal and aeroelastic domains, as shown in Figure 3, where the arrows denote the flow of information between the disciplines. This partitioning approach takes into account the differences in required time steps and also allows for different time-marching schemes for the different solvers [6]. As shown in Figure 4, the aeroelastic model requires much smaller time steps, while the aerothermal model updates the structural temperature distribution at much larger time steps. A disadvantage of the staggered procedure is that accuracy degrades as the simulation time increases. Another promising method is the combined interface boundary condition (CIBC) procedure [7]. The CIBC procedure appears to have better stability and accuracy characteristics than the staggered scheme, thus addressing two of the main issues of time stepping with partitioned solvers. Structures contain regions where locally large gradients occur, both spatially and temporally, due to the loading. This is also a multiscale issue, as the physical phenomenon causing the gradient occurs at a different scale than is of interest in the structural-scale analysis. Conventional FEM practices require increasing the degrees of freedom (DOFs) in regions where spatial resolution is desired and decreasing the time increment when an increase in the temporal resolution is required. However, as noted in [8], “the tyranny of scales will not be defeated simply by building bigger and faster computers. Instead, we will have to revamp the fundamental ways we conceive of scientific and engineering methodologies, long the mainstays of human progress”. Increasing the number of DOFs might lead to convergence with respect to the energy norm; however, the same is not true concerning local error. In recent work by O’Hara et al.
[9, 10], a 100% error in the maximum temperature was obtained for a converged conventional FEM solution for a problem involving a sharp thermal gradient, as shown in Figure 5. A structure upon which a shock wave impinges could experience a similar sharp thermal gradient. Conventional techniques are unable to recover the imposed boundary conditions in regions of high gradients. This influences the displacements, stresses, and strains, resulting in incorrect life and risk estimates. The “tyranny of scales” is more problematic for combined physics problems such as fluid-thermal-structure interactions and where long time records are desired. Recall that the Digital Twin will require simulating a flight of an hour or more in actual time. Methods being developed to specifically address the multiscale analysis of steep gradients include the generalized finite element method (GFEM), the space-time method, and reduced-order models (ROMs). Each method offers the capability to introduce necessary physical phenomena at a level of accuracy that cannot be achieved with conventional finite element methods for structural-scale analysis. All three computational methods are active research areas for fluid, thermal, and structural disciplines, providing a commonality of approach across disciplines. All three methods promise the advantage of reducing model complexity and computational time. GFEM uses a priori knowledge of the solution to augment the standard FE shape functions with “enriched” shape functions that capture the physical phenomenon of interest. GFEM uses concepts from the partition of unity [11], where standard FE shape functions are enriched in the master element, allowing for the use of existing infrastructure and algorithms from classical finite element methods. GFEM has been applied to fracture mechanics [12], thermal analysis [3, 9], and CFD [13] problems. However, three research issues remain in order for GFEM to address real aerospace situations.
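The partition-of-unity enrichment behind GFEM can be illustrated in one dimension. The hat functions of a standard linear mesh sum to one everywhere, so multiplying them by an enrichment function chosen from a priori knowledge of the solution (here a hyperbolic tangent matching the gradient's character) injects that behavior into the approximation space. The mesh, target function, and enrichment below are all invented for the sketch; least-squares fitting stands in for an actual Galerkin solve.

```python
import numpy as np

def hats(x, nodes):
    # Linear FE "hat" shape functions on a uniform 1D mesh. On the domain
    # they form a partition of unity: sum_i N_i(x) = 1 for every x.
    h = nodes[1] - nodes[0]
    return np.maximum(0.0, 1.0 - np.abs(x[None, :] - nodes[:, None]) / h)

nodes = np.linspace(0.0, 1.0, 6)        # deliberately coarse mesh (5 elements)
x = np.linspace(0.0, 1.0, 2001)         # evaluation points
u = np.tanh((x - 0.5) / 0.02)           # "solution" with a sharp gradient

N = hats(x, nodes)
assert np.allclose(N.sum(axis=0), 1.0)  # partition-of-unity property

# Enrichment chosen from a priori knowledge of the gradient's shape.
psi = np.tanh((x - 0.5) / 0.02)

A_std = N.T                              # standard FE approximation space
A_gfem = np.hstack([N.T, (N * psi).T])   # enriched space: N_i and N_i * psi

c_std = np.linalg.lstsq(A_std, u, rcond=None)[0]
c_gfem = np.linalg.lstsq(A_gfem, u, rcond=None)[0]

err_std = np.max(np.abs(A_std @ c_std - u))     # large: mesh too coarse
err_gfem = np.max(np.abs(A_gfem @ c_gfem - u))  # tiny: enrichment captures it
```

The same coarse mesh that badly misses the gradient with standard shape functions resolves it almost exactly once the enriched functions are added, which is the appeal of GFEM for sharp thermal gradients on structural-scale meshes.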
The thermal example in Figure 5 is a simple problem with a static gradient. In general, the gradients in the structure change in magnitude and location during a flight. Therefore, different elements in the FEM will need enrichment at different times during a flight. Research is also needed on the type of enrichment functions that are best able to capture the gradients for computational efficiency. Finally, given an accurate representation of a thermal gradient, how should the structural shape functions be enriched to accurately calculate strains and stresses in the region of the thermal gradient? Space-time methods assume that the time dimension can be treated in the element domain using interpolation functions, as is customary in the spatial domain; that is, approximations are established in both space and time. The first space-time application can be traced to Oden in 1969, where it was applied to wave propagation in a bar [8]. More recently, space-time methods have been applied in formulating a fully coupled framework to address acoustic-structure interaction as well as many other applications requiring self-adaptive solution strategies to track transient waves propagating spatially and temporally, such as those occurring in fluid-structure interactions [14]. The algorithm is unconditionally stable and is not limited by the critical time step. Finally, the space-time method provides a framework for improving the solution accuracy through enrichment. This is particularly promising for history-dependent life prediction. While space-time methods show promise in addressing transient dynamic phenomena, there are disadvantages associated with the method. Besides adding complexity in the element formulation, the size of the global stiffness matrix is n^2 times that of a semidiscrete formulation, where n is the number of interpolation points in time.
For example, a linear element in time would require four times the global stiffness size of a semidiscrete time formulation, and nine times the size for a quadratic formulation. This greatly increases the computational costs associated with the space-time procedure and may no longer provide a benefit for many typical problems solved with finite elements. ROMs reduce the computational burden of direct time integration of an FEM with random dynamic loading prescribed, while retaining the necessary accuracy. Direct time integration of a standard, large-order FEM for a mission-scale length of time can be cost- and time-prohibitive. The use of nonlinear structural ROMs, particularly for acoustic fatigue type response, is well documented in the literature [15–18]. Further, adaptation of nonlinear structural ROMs to coupled thermal and deformation problems through the use of “cold modes” has been proposed [13, 14, 19]. These schemes use the linear mode shapes of the structure in the so-called unstressed or cold state to span the response at different temperature states. The linear portions of the structural ROM become functions of temperature while the nonlinear terms remain constant. Falkiewicz and Cesnik [20] describe a thermal ROM based on proper orthogonal decomposition and the method of snapshots that was used to update the temperature distribution in the structural ROM via the thermal flux. Both the structural and thermal ROMs were linear. Coupled ROMs have also been used to create a unified aeroelastic and flight dynamic formulation [21]. A ROM will likely be needed to provide the aerodynamic loads over an entire flight. CFD is normally used to simulate an aircraft in cruise conditions, not as it accelerates and maneuvers [22].
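The method of snapshots mentioned above is a standard way to build a reduced basis, and it can be sketched with a synthetic snapshot matrix. The sizes, mode shapes, and noise level below are invented; in practice the snapshots would come from full-order thermal or structural solves.

```python
import numpy as np

rng = np.random.default_rng(0)
n_dof, n_snap = 500, 40                 # full-model DOFs, number of snapshots

# Synthetic snapshot matrix: three smooth dominant modes plus small noise.
x = np.linspace(0.0, 1.0, n_dof)
modes = np.stack([np.sin((k + 1) * np.pi * x) for k in range(3)])
amps = rng.normal(size=(n_snap, 3))
snapshots = amps @ modes + 1e-3 * rng.normal(size=(n_snap, n_dof))

# POD basis from the thin SVD of the snapshot matrix (method of snapshots).
U, s, Vt = np.linalg.svd(snapshots.T, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999)) + 1   # modes retaining 99.9% energy
basis = U[:, :r]                              # n_dof x r reduced basis

# Project a snapshot onto the reduced basis and reconstruct it.
q = basis.T @ snapshots[0]                    # r reduced coordinates
recon_err = np.linalg.norm(basis @ q - snapshots[0]) / np.linalg.norm(snapshots[0])
```

The singular-value energy criterion truncates the basis to the few modes that dominate the response, so the time integration can then be carried out on r generalized coordinates instead of the full set of DOFs.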
While some non-steady-state flight conditions, such as recovery of an F-18 over a carrier deck [23] and a cargo airdrop from a C-130 [24], have been simulated with overset grid methods [25], longitudinal accelerations of an aircraft are still a challenge. In CFD, velocity changes to the fluid flow are introduced at one side of the meshed region and propagate across the region with time. However, when an aircraft changes airspeed, the air surrounding the aircraft changes speed relative to the aircraft all at once.

4.2. Multiscale Damage Modeling

The Digital Twin will mimic the behavior of its actual aircraft as much as possible. Assuming damage of specific sizes at all the critical locations, as is done in damage tolerance requirements, is not in keeping with this intent. Damage tolerance checks for slow crack growth or fail safety will still be performed to ensure safety. However, only known damage will be placed in the Digital Twin for life management purposes. If damage is not known to exist at a location, a distribution of damage-forming features, such as those in Figure 6 for fatigue cracks, will be assumed at that location. The distribution and type of damage-forming features will depend upon the material and fabrication process. Physics-based models for how damage forms from these features will need to be integrated into the structural FEM. Figure 6 is a schema for the stochastic information flow back and forth between the material and structural scales of a nondeterministic life prediction for fatigue cracking. Figure 6 shows the conceptual data flow through the use of actual component- and material-scale imagery and actual and surmised associated statistics [26, 27]. An assessment of reliability against a fracture limit state starts with a proposed component-scale structural design (Figure 6). A stress analysis is performed, and fatigue “hot spots” are located in the component.
This assessment is stochastic because it has to consider uncertainties in boundary conditions, geometry, usage, and environment. Statistically representative microstructure-scale volume elements, such as that depicted in Figure 7, are then created at each hot spot. As shown in the images in Figure 6, the physics and mechanics of the incubation, nucleation, and microstructurally small fatigue crack (MSFC) processes and events must be known, incorporated in predictive software, and used to produce the relevant statistics for each stage of crack growth. In this particular example, the physics and mechanics of incubation and nucleation in 7075-T651 are associated with Al7Cu2Fe constituent particles, and a set of four such particles is hypothetically tracked through the cracking events and processes. At any given stage, the particles outlined in red are participating in that stage; particles outlined in black are inactive. Incubation is the period before a crack in a particle penetrates into the aluminum matrix. Nucleation is when the crack in a particle first extends into the matrix, as indicated by the arrows in the upper photographs in Figure 6. MSFC propagation occurs as the crack grows away from the particle and navigates across a number of grains to a size at which traditional component-scale fatigue models become applicable. At this stage in Figure 6, a distribution of MSFC growth rates has been produced. MSFC propagation is governed by (as yet not well known) rules for crack growth rate, direction, shape, grain boundary interaction, and coalescence. Screening identifies the nucleated cracks that grow at a rate below a threshold and deems these microcracks inactive; the remaining active microcracks are carried forward for long crack growth simulation. Traditional, structural-scale fatigue crack growth methods are then applied to predict the remaining number of cycles to failure.
Using the statistics accrued through the course of the modeling approach, a distribution of life, or reliability, is finally determined (Figure 6). Similar schemata need to be developed for other damage processes and for other materials such as composites or hybrids. The multiscale physics models for thermomechanical fatigue, creep, fretting and wear, corrosion and oxidation, and delamination and microcracking of composite materials need to be developed for aircraft structural materials. The possibility of synergistic interactions between damage mechanisms at all scales must also be explored. The effort that went into developing the level of understanding of fatigue crack formation in 7075-T651 depicted in Figure 6 was phenomenal by any standard. Developing these models for all of the other damage mechanisms and for every aircraft structural material in any reasonable period of time will require an investment on the order of the DARPA Accelerated Insertion of Materials program [28].

4.3. Integration of Structural FEM and Damage Models

It is important to understand that each stage of crack growth, or other damage process, is driven by stochastic driving forces computed at the component scale and that the component-scale model itself will periodically have to be updated to account for material-scale damage that reduces local stiffness, resulting in redistribution of local stress and thermal fields. The need for a stochastic, multiscale simulation capability that integrates material performance with structural response becomes obvious. Employing traditional cycle-based fatigue models in the Digital Twin, besides adding significant memory and storage requirements to an already large model, will contribute to uncertainty by allowing updates to the damage state of the structural model only after the completion of a stress cycle.
When the damage state at a location is finally updated, there is a step function change in the local stiffness and resulting redistribution of stress. However, because damage has been developing in the physical aircraft all during the stress cycle, the stiffness change has been continuous and gradual, as has the redistribution of stress. Thus the damage from the stress cycle may not be as much as the cycle-based model would predict because the stress redistribution had reduced the amplitude of the stress cycle. Cycle-based models are a historical remnant of the time when experimental and numerical modeling capabilities were more constrained than today. With the ability to record video images at scanning electron microscope scales and process those images to measure small displacements, fatigue crack growth increments can be determined more frequently than every few hundred cycles. Improvements in the hardware and software for numerical modeling make it easier to solve differential equations for which the time discretization is smaller than a stress cycle. There is no reason why models relating the time rate of change of damage to the time rate of change of the stress cannot be developed today. Small time scale fatigue crack growth models have been proposed that compute the increment of crack growth at any time instant [29–31]. These models were developed from the standard cyclic fracture mechanics relationship for crack growth. The models were able to represent cyclic crack growth data under constant and variable amplitude loading without having to count cycles or track crack closure levels. Such models will facilitate the integration of damage models into structural FEMs. The challenge of establishing a two-way coupling between a structural FEM and a damage model still remains even if the damage model is time based. One approach to integrating a material damage model into a structural FEM is continuum damage mechanics. 
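The time-based idea can be sketched with a hedged toy rule: instead of accruing a Paris-law increment once per counted cycle, let growth accrue continuously while the stress intensity is rising. The rate law below is chosen so that its time integral over one load cycle reproduces the cycle-based increment; it illustrates the concept and is not the specific formulation of [29–31]. Constants and units are made up.

```python
import numpy as np

C, m = 1e-10, 3.0                 # Paris-type constants, illustrative units

def dadt(K, dKdt):
    # Time-based rule: da/dt = m*C*K**(m-1) * dK/dt while the load rises,
    # zero otherwise. Integrating over a rising ramp 0 -> Kmax gives C*Kmax**m.
    return m * C * K ** (m - 1) * dKdt if dKdt > 0 else 0.0

# One sinusoidal load cycle at R = 0: K(t) = (Kmax/2) * (1 - cos(2*pi*t)).
Kmax, n = 10.0, 20000
t = np.linspace(0.0, 1.0, n + 1)
K = 0.5 * Kmax * (1.0 - np.cos(2.0 * np.pi * t))
dKdt = np.gradient(K, t)

rate = np.array([dadt(k, dk) for k, dk in zip(K, dKdt)])
da_time = np.sum(0.5 * (rate[1:] + rate[:-1]) * np.diff(t))  # trapezoid rule
da_cycle = C * Kmax ** m          # cycle-based Paris increment for this cycle

rel_diff = abs(da_time - da_cycle) / da_cycle
```

Because the increment is accumulated instant by instant, the same rule works unchanged for variable-amplitude histories, with no cycle counting, and the crack length (and hence local stiffness) can be updated at any point within a cycle.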
Continuum damage mechanics approaches the local failure of materials by introducing internal state variables to quantify the local degradation at a material point. The effect of this degradation is included in the stiffness equations for the material in the FEM. Successful application of continuum damage mechanics depends upon the formulation of the evolution equations for the state variables. Void growth models are the most common damage evolution models [32], but continuum damage mechanics has also been applied to subcritical crack growth under monotonic loading [33]. While the evolution equations can be formulated using a phenomenological procedure, a multiscale homogenization procedure using the physics-based damage models discussed above would be preferred. Partition of unity methods, such as GFEM and the extended FEM (XFEM), are also promising approaches for integrating damage into the structural FEM. For instance, these methods allow the representation of discontinuities and singularities via geometric descriptions of the crack surfaces, which are independent of the volume mesh, coupled with suitable enrichment functions [34]. In other words, a single finite element mesh suffices for modeling as well as capturing the evolution of material boundaries and cracks because the finite element mesh does not need to conform to internal boundaries. Both approaches have been successful at integrating damage into a structural FEM when there is a single dominant damage driver, typically stress or strain. Simulation of damage development in more complex situations involving varying temperature, environment, and stresses is currently limited by the development of suitable damage state evolution equations.

4.4. Uncertainty Quantification, Modeling, and Control

The purpose of quantifying uncertainty in a model is to exercise some control over the magnitude of the uncertainty [35, 36]. The magnitude of the uncertainty can be controlled by the choice of model fidelity and scale.
It is not a given that the finest analysis scale and highest-fidelity models decrease uncertainty in the simulation results sufficiently to justify their cost. There may be input parameters, such as the applied loading, whose uncertainty overrides all of the other uncertainties in the model. Higher-fidelity models and grain-scale analyses will not decrease the uncertainty associated with the applied loading. Computational cost increases as finer scales are analyzed and higher-fidelity models are used. The choice of what fidelity and scale to use in a simulation should be based upon the acceptable level of uncertainty and the computational cost to achieve it. The ideal situation is to know the impact of scale and fidelity choices on uncertainty prior to performing any simulations. Determining, after the effort, that a simulation did not provide an acceptable level of uncertainty is of little benefit. The cost of one simulation has already been incurred, and now another simulation of finer scale and higher fidelity will need to be performed with no guarantee that it will provide an acceptable level of uncertainty either. Sampling methods, such as Monte Carlo simulations and variations thereof, are commonly used to determine the uncertainty/variability in results. However, performing Monte Carlo simulations with different realizations of a model of an entire aircraft over a complete flight brings significant computational cost. Modeling and simulation of an entire airframe is computationally intensive even with today’s highly idealized elastic models at selected points in the flight spectrum. A detailed, nonlinear simulation of an entire flight for a complete airframe amplifies the computational time by orders of magnitude. Monte Carlo simulations would require analyses of tens, if not hundreds, of realizations “flying” the same mission in order to obtain just the first and second moments of the distribution for each output variable.
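The moment-estimation step itself is simple, which is why Monte Carlo is so widely used despite its cost; the expense is entirely in the repeated model evaluations. In the sketch below a cheap made-up surrogate function stands in for a full nonlinear flight simulation, and the input distributions are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_flight(load_scale, temperature):
    # Hypothetical scalar response (e.g., peak stress at a hot spot); a real
    # Digital Twin sample would be a full nonlinear flight simulation.
    return 100.0 * load_scale + 0.05 * temperature ** 2

n_samples = 2000
loads = rng.normal(1.0, 0.1, n_samples)      # uncertain load factor
temps = rng.normal(300.0, 15.0, n_samples)   # uncertain temperature, K

# Each entry is one "flight" of the model under a sampled realization.
responses = np.array([simulate_flight(l, T) for l, T in zip(loads, temps)])
mean, std = responses.mean(), responses.std(ddof=1)
```

Even for this trivial surrogate, thousands of evaluations are needed for stable first and second moments; replacing `simulate_flight` with an hours-long airframe simulation makes the cost argument in the text concrete.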
While sampling methods are readily parallelized, more sophisticated probabilistic methods, such as stochastic finite element methods (SFEMs), may prove better suited than sampling. SFEM comprises three basic steps: discretization of the stochastic fields representing the uncertain input parameters, formulation of the stochastic matrix at the element and then at the global level, and, finally, the response variability calculation [37]. There are two main variants of SFEM: the perturbation approach, based on a Taylor series expansion of the response vector, and the spectral stochastic finite element method, where each response quantity is represented by a series of random Hermite polynomials. Each of these variants has issues in terms of computational effort that make application to large-scale nonlinear systems, such as an aircraft, currently prohibitive. There are other developments in SFEM that hold promise. The first is stochastic reduced basis methods (SRBMs), where the response process is represented using basis vectors that span the preconditioned, stochastic Krylov subspace [38]. SRBMs are computationally efficient compared to polynomial chaos expansions (PCEs) at a comparable level of accuracy and so are better suited for solving large-scale problems. The computational costs of simulations using PCEs are such that solutions have been limited to uncertain systems with a small number of degrees of freedom. The SRBM formulation is limited to the analysis of random linear systems at this time, as is the PCE formulation. Furthermore, the basis vectors are problem dependent, which limits the ability to develop a general approach. The second development is nonintrusive SFEM approaches [39]. These approaches take advantage of powerful existing deterministic FE codes by building a surrogate response surface model using PCEs.
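A minimal nonintrusive sketch: sample a black-box "solver" at random inputs and fit a Hermite polynomial-chaos surrogate by least squares. The solver here is a made-up scalar function of a standard normal variable; a real application would call a deterministic FE code at each sample point.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermevander

rng = np.random.default_rng(1)

def black_box_solver(xi):
    # Stand-in for a deterministic FE run driven by a standard normal input.
    # (Deliberately a degree-2 chaos: 2 + 0.5*He1(xi) + 0.3*He2(xi).)
    return 2.0 + 0.5 * xi + 0.3 * (xi**2 - 1.0)

xi = rng.normal(size=200)          # sample the uncertain input
y = black_box_solver(xi)           # run the "solver" at each sample

P = 3                              # chaos order of the surrogate
Psi = hermevander(xi, P)           # probabilists' Hermite basis He_0..He_P
coef = np.linalg.lstsq(Psi, y, rcond=None)[0]

# Moments follow directly from the chaos coefficients, using orthogonality
# of He_k under the standard normal measure with E[He_k^2] = k!.
mean_pce = coef[0]
var_pce = sum(math.factorial(k) * coef[k] ** 2 for k in range(1, P + 1))
```

The appeal of the nonintrusive route is visible here: the solver is only ever called as a black box, so an existing deterministic code needs no modification, and response statistics fall out of the fitted coefficients.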
The ability to use any third-party FE code speeds the development and transition of SFEM by removing the need to develop an entire SFEM system from scratch. The investments in deterministic FE codes can be leveraged. Other developments of interest include multiscale SFEM [40] and an extended SFEM (X-SFEM) analogous to XFEM for deterministic FEM [41]. Multiscale SFEM seeks to propagate uncertainty information in fine-scale quantities, such as microstructure, to coarse-scale quantities, such as stiffness and strength, that are functions of the fine-scale quantities. X-SFEM provides for the propagation of geometric uncertainties in the solution of partial differential equations, that is, PDEs defined on random domains.

4.5. Manipulation of Large, Shared Databases

A model of an entire airframe is, by itself, an enormous database that is difficult to input, maintain the integrity of, and manipulate. The basic geometry and assembly of components for the airframe can be established with a CAD system. Discretization of the individual components can be challenging, especially for large, detailed structural components. The integrity of the geometry and the discretization of this large, complex model must be established and maintained over the life of the model. The discretization must be adaptable in order to adequately model the insertion of unexpected damage and subsequent repairs that occur during the service life of an aircraft. These tasks will likely need to be automated, as current manual methods are not up to the Digital Twin challenge. The information generated by performing flight-by-flight simulations over the entire design or service life of an aircraft will be voluminous. In fact, manipulation of just the information from the simulation of a single flight for an entire air vehicle stretches current capabilities. For the Digital Twin, the results from the simulation of every flight during the life of the aircraft must be kept available.
In order to use this information for making decisions about the continuing airworthiness of the vehicle, all of the information contained in the simulation results must be accessible. Rapid, focused interrogation of the database to support specific decisions must be possible. Some interrogations will need to be automated. For instance, it is not practical to manually search through an entire aircraft, physically or virtually, to locate damage hot spots. Visualizing just 1% of the 1 petabyte of data envisioned as the output of a virtual flight would take 35 workdays at the current rate of 10 MB/s [4]. The Digital Twin will need to automatically identify locations with prescribed levels of damage and present this information in a user-friendly way.

4.6. High-Resolution Structural Analysis Capability

Simulation-based design and certification will require very high performance computing, performance far beyond what is commonly used for aircraft structural analyses today. As envisioned, the Digital Twin of a complete airframe will have on the order of 10^12 degrees of freedom. If multiscale models of the microstructure, such as in Figure 7, are required at some locations, these models will have on the order of 10^7 degrees of freedom at each location [26]. Despite its large size, the Digital Twin must execute with sufficient speed so that the modeling and simulation can keep up with the actual usage of the aircraft; that is, a 1-hour flight must be simulated in 1 hour of clock time or less. If simulations are unable to stay ahead of the actual aircraft, the power of the Digital Twin for life prediction and decision making is lost. Clearly, very high performance computing will be needed to meet the vision of the Digital Twin. But high-performance computing (HPC) is a relative term. In the next several years, petaflop-per-second power will become available.
Within a decade, it is expected that exaflop-per-second computers will become available: “extrapolation of current hardware trends suggests that exascale systems could be available in the marketplace by approximately 2022 via a ‘business as usual’ scenario. With the appropriate level of investments, it may be possible to accelerate the availability by up to five years, to approximately 2017” [4]. The teraflop-per-second scale computing of today is effectively infinite computing power by the standard of many engineers. The problem is not the availability of high-performance computing hardware, but rather the usability of it through the appropriate codes and other software tools. A recent U.S. Department of Defense survey [42] found that the average age of commercially available finite element software is about 20 years and that the typical maximum number of processors such tools could effectively access was about 300! The gap between hardware capability and software performance is recognized by the HPC community [4]: “advanced and improved simulations demand significant advances in mathematical methods, scalable algorithms, and their implementations. The required advances are driven by the increased complexity of the problems, involving multiple and coupled physics models, high dimensionality described by large numbers of equations (PDEs, ODEs, DAEs, geometric descriptions and boundary conditions, optimization, etc.), and by the huge time and spatial scales.” The Digital Twin is typical of what is called an “E3 Application,” a complex system in high-dimensional spaces. The HPC community has identified certain application characteristics and mathematics and algorithms needs (Figure 8), which have already been discussed for the Digital Twin. Within the context of the Digital Twin, there is a need to solve coupled PDEs, quantify uncertainty, and design and optimize the structure, while handling large and noisy data.
The HPC community has accurately described the computational problems of the Digital Twin. Solving very large systems of PDEs with uncertainty everywhere is at the core of the Digital Twin concept. DoD has taken steps to address this need through its Computational Research and Engineering Acquisition Tools and Environments (CREATE) Program. Specifically, the air vehicle portion has developed a fixed-wing virtual aircraft simulation tool called Kestrel [43]. Kestrel integrates multiple single-executable modules. The most significant for the purposes of the Digital Twin are a CFD solver and a linear modal representation of the aircraft, along with fluid-structure interfacing operations. Aerodynamic loads are applied directly to the structure. However, the stresses in the structural components still need to be determined to enable life prediction.

5. Developing the Digital Twin

There are many challenges that must be overcome in developing the Digital Twin. It is difficult to put together a comprehensive Digital Twin development plan that covers a decade or more of activities. However, the initial, path-finding work that has been done and that is planned for the near future will be discussed. The Air Vehicles Directorate at the U.S. Air Force Research Laboratory has been investigating a ROM for obtaining aerodynamic loads on the aircraft, or internal stresses, from pilot inputs either in the actual aircraft or in a flight simulator. This activity developed out of work to streamline the clearance for external stores on an aircraft [21, 44]. The integration of this stick-to-stress ROM into structural life prediction will be investigated as part of a program to demonstrate the potential of higher-fidelity stress history, structural reliability analysis, and structural health monitoring for improving the management of airframes. Two full-scale fatigue tests of aircraft assemblies will serve as surrogates for actual flying aircraft in this program.
The result of this program will be an initial, low-fidelity “Digital Twin.” Additional spirals of development will increase the fidelity of this Digital Twin by incorporating new technologies as they mature. One of the technologies that may be ready in time for the second development spiral is physics-based models for damage development and progression from NASA’s Damage Sciences [45] and the Air Force Office of Scientific Research’s Structural Mechanics programs. Another may be the coupling of different physics models, that is, thermal, dynamic, and stress. This topic is being actively worked by the Air Vehicles’ Structural Sciences Center [1, 2]. A third technology is the digital thread manufacturing technology [46] for the F-35. The digital thread makes it easier to see how the information necessary to construct tail-number-specific structural models can be collected. In the digital thread, the same 3D solid models from engineering design are used in manufacturing for numerically controlled programming and coordinate measurement machine inspections. Laser measurements are used with the digital thread to virtually mate parts in order to identify potential fit-up problems prior to actually mating the parts. The digital thread, together with the F-35 production rates, has enabled Lockheed to use automated hole drilling in many places. Therefore, to within the accuracy of the production measurement systems, the dimensions of many of the detail parts and the locations of many of the fastener holes were known at one time during production. It becomes a matter of supplying that information as initial conditions to the Digital Twin for an aircraft as it enters service.

6. Advantages of a Digital Twin

In the current life prediction process for aircraft, each type of physics has its own separate model.
There is the computational fluid dynamics (CFD) model, the structural dynamics model (SDM), the thermodynamic model, the stress analysis model (SAM), and the fatigue cracking model (FCM). Computational capabilities have restricted what physics and damage models are considered during the life prediction process. Information is passed between the physics models by writing the results from one model to a file, translating that output file into an input file for the other model, and finally reading the input file into the second model. This process makes it difficult to develop a synchronized stress-temperature-chemical (STC) loading spectrum. Furthermore, the effect of damage development on the stress or temperature history is not considered. The approach has been to assume some appropriately severe conditions during the design and subsequent usage tracking for an aircraft. Such an approach is usually conservative, but it leads to an air vehicle that is heavier than it needs to be and inspected more frequently than necessary. With the Digital Twin, the SDM, the SAM, the FCM, and possibly other material state evolution models would be integrated into a single unified structural model that is tightly coupled to a CFD Digital Twin. The physics involved would be seamlessly linked, the way the physics is linked in the physical structure. The joint STC loading history for the aircraft will result directly from the simulation of the flight. This joint spectrum can be found for any location in the structure and will not rely on an idealized transfer function. As damage develops within the structure, the local STC spectrum will naturally adjust for the presence of damage. It will not be necessary to assume the repetition of a statistically representative spectrum over the lifetime of the vehicle either; the STC spectrum can evolve as the usage of the vehicle and the age of the structure dictate.
Damage findings, repairs, replacements, and structural modifications, if they are recorded at all, are presently maintained in a database separate from the structural analysis models. It is not clear that this database is consulted when updating the remaining useful life of an aircraft. Such information is certainly not stored in a format that facilitates its use in a structural analysis model. The Digital Twin would provide a visual database that is directly related to both the structural model and the physical aircraft. Therefore, in addition to providing a structural life prediction tool, the Digital Twin also facilitates configuration control for an individual aircraft. The Digital Twin will enable better management of an aircraft throughout its service life. Engineers will have more information about the condition of the aircraft and will have it sooner. This will allow better maintenance decisions to be made in a timely manner.

This paper was cleared for public release—Distribution A—on 29 July 2011 as case 88ABW-2011-4181.

1. B. A. Miller, J. J. McNamara, A. J. Culler, and S. M. Spottswood, “The impact of flow induced loads on snap-through behavior of acoustically excited, thermally buckled panels,” in Proceedings of the 51st AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference, Orlando, Fla, USA, April 2010, AIAA-2010-2540.
2. A. J. Culler, A. R. Crowell, and J. J. McNamara, “Studies on fluid-structural coupling for aerothermoelasticity in hypersonic flow,” in Proceedings of the 50th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference, Palm Springs, Calif, USA, May 2009, AIAA-2009-2364.
3. R. Merle and J. Dolbow, “Solving thermal and phase change problems with the extended finite element method,” Computational Mechanics, vol. 28, no. 5, pp. 339–350, 2002.
4. H. Simon, T. Zacharia, and R.
Stevens, Modeling and Simulation at the Exascale for Energy and the Environment, Office of Science, U.S. Department of Energy, 2007, http://www.sc.doe.gov/ascr/ProgramDocuments/ProgDocs.html.
5. R. K. Jaiman, X. Jiao, P. H. Geubelle, and E. Loth, “Assessment of conservative load transfer for fluid-solid interface with non-matching meshes,” International Journal for Numerical Methods in Engineering, vol. 64, no. 15, pp. 2014–2038, 2005.
6. C. A. Felippa, K. C. Park, and C. Farhat, “Partitioned analysis of coupled mechanical systems,” Computer Methods in Applied Mechanics and Engineering, vol. 190, no. 24-25, pp. 3247–3270, 2001.
7. B. Roe, R. Jaiman, A. Haselbacher, and P. H. Geubelle, “Combined interface boundary condition method for coupled thermal simulations,” International Journal for Numerical Methods in Fluids, vol. 57, no. 3, pp. 329–354, 2008.
8. J. T. Oden, “A general theory of finite elements II. Applications,” International Journal for Numerical Methods in Engineering, vol. 1, no. 3, pp. 247–259, 1969.
9. P. O'Hara, C. A. Duarte, and T. Eason, “Transient analysis of sharp thermal gradients using coarse finite element meshes,” Computer Methods in Applied Mechanics and Engineering, vol. 200, no. 5-8, pp. 812–829, 2011.
10. P. O'Hara, C. A. Duarte, and T. Eason, “Generalized finite element analysis of three-dimensional heat transfer problems exhibiting sharp thermal gradients,” Computer Methods in Applied Mechanics and Engineering, vol. 198, no. 21-26, pp. 1857–1871, 2009.
11. C. A. Duarte, I. Babuška, and J. T. Oden, “Generalized finite element methods for three-dimensional structural mechanics problems,” Computers and Structures, vol. 77, no. 2, pp. 215–232, 2000.
12. C. A. Duarte and D. J. Kim, “Analysis and applications of a generalized finite element method with global-local enrichment functions,” Computer Methods in Applied Mechanics and Engineering, vol. 197, no. 6-8, pp. 487–504, 2008.
13. L. T. Zhang, G. J. Wagner, and W. K. Liu, “A parallelized meshfree method with boundary enrichment for large-scale CFD,” Journal of Computational Physics, vol. 176, no. 2, pp. 483–506, 2002.
14. L. L. Thompson and P. M. Pinsky, “A space-time finite element method for structural acoustics in infinite domains part 1: formulation, stability and convergence,” Computer Methods in Applied Mechanics and Engineering, vol. 132, no. 3-4, pp. 195–227, 1996.
15. A. Przekop and S. A. Rizzi, “Nonlinear reduced order random response analysis of structures with shallow curvature,” AIAA Journal, vol. 44, no. 8, pp. 1767–1778, 2006.
16. A. Przekop and S. A. Rizzi, “Dynamic snap-through of thin-walled structures by a reduced order method,” in Proceedings of the 47th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference, pp. 1853–1863, Newport, RI, USA, May 2006, AIAA-2006-1745.
17. J. J. Hollkamp, R. W. Gordon, and S. M. Spottswood, “Nonlinear modal models for sonic fatigue response prediction: a comparison of methods,” Journal of Sound and Vibration, vol. 284, no. 3-5, pp. 1145–1163, 2005.
18. B. Yang, M. P. Mignolet, and S. M. Spottswood, “Modeling of damage accumulation for Duffing-type systems under severe random excitations,” Probabilistic Engineering Mechanics, vol. 19, no. 1, pp. 185–194, 2004.
19. X. Q. Wang, M. P. Mignolet, T. G. Eason, and S. M.
Spottswood, “Nonlinear reduced order modeling of curved beams: a comparison of methods,” in Proceedings of the 50th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference, Palm Springs, Calif, USA, May 2009, AIAA-2009-2433.
20. N. J. Falkiewicz and C. E. S. Cesnik, “A reduced-order modeling framework for integrated thermo-elastic analysis of hypersonic vehicles,” in Proceedings of the 50th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference, Palm Springs, Calif, USA, May 2009, AIAA-2009-2308.
21. D. H. Baldelli, P. C. Chen, and J. Panza, “Unified aeroelastic and flight dynamic formulation via rational function approximations,” Journal of Aircraft, vol. 43, no. 3, pp. 763–772, 2006.
22. D. Garretson, H. Mair, C. Martin, K. Sullivan, and J. Telchman, “Review of CFD capabilities,” DTIC Accession Number ADA 537587, Institute for Defense Analysis, 2005.
23. J. D. Shipman, S. Arunajatesan, P. A. Cavallo, and N. Sinha, “Dynamic CFD simulation of aircraft recovery to an aircraft carrier,” in Proceedings of the 26th AIAA Applied Aerodynamics Conference, Honolulu, Hawaii, USA, August 2008, AIAA-2008-6227.
24. R. W. Noack, “SUGGAR: a general capability for moving body overset grid assembly,” in Proceedings of the 17th AIAA Computational Fluid Dynamics Conference, Ontario, Canada, 2005, AIAA-2005-5117.
25. K. Nakahashi, Y. Ito, and F. Togashi, “Some challenges of realistic flow simulations by unstructured grid CFD,” International Journal for Numerical Methods in Fluids, vol. 43, no. 6-7, pp. 769–783, 2003.
26. J. E. Bozek, J. D. Hochhalter, M. G. Veilleux et al., “A geometric approach to modeling microstructurally small fatigue crack formation: I. Probabilistic simulation of constituent particle cracking in AA 7075-T651,” Modelling and Simulation in Materials Science and Engineering, vol. 16, no. 6, Article ID 065007, 2008.
27. J. M. Emery, J. D. Hochhalter, P. A. Wawrzynek, G. Heber, and A. R. Ingraffea, “DDSim: a hierarchical, probabilistic, multiscale damage and durability simulation system—part I: methodology and Level I,” Engineering Fracture Mechanics, vol. 76, no. 10, pp. 1500–1530, 2009.
28. Integrated Computational Materials Engineering–A Transformational Discipline for Improved Competitiveness and National Security, Committee on Integrated Computational Materials Engineering, National Research Council, The National Academies Press, Washington, DC, USA, 2008.
29. Z. Lu and Y. Liu, “Small time scale fatigue crack growth analysis,” International Journal of Fatigue, vol. 32, no. 8, pp. 1306–1321, 2010.
30. W. Zhang and Y. Liu, “Investigation of incremental fatigue crack growth mechanisms using in situ SEM testing,” International Journal of Fatigue, in press.
31. S. Pommier and M. Risbet, “Time derivative equations for mode I fatigue crack growth in metals,” International Journal of Fatigue, vol. 27, no. 10-12, pp. 1297–1306, 2005.
32. P. J. Rabier, “Some remarks on damage theory,” International Journal of Engineering Science, vol. 27, no. 1, pp. 29–54, 1989.
33. W. June and C. L. Chow, “Subcritical crack growth in ductile fracture with continuum damage mechanics,” Engineering Fracture Mechanics, vol. 33, no. 2, pp. 309–317, 1989.
34. J. P. A. Pereira, Generalized finite element methods for three-dimensional crack growth simulations, Ph.D. thesis, University of Illinois, Urbana, Ill, USA, 2010.
35. J. T. Oden, T. Belytschko, I. Babuska, and T. J. R. Hughes, “Research directions in computational mechanics,” Computer Methods in Applied Mechanics and Engineering, vol. 192, no. 7-8, pp. 913–922, 2003.
36. H. T.
Banks, “Remarks on uncertainty assessment and management in modeling and computation,” Mathematical and Computer Modelling, vol. 33, no. 1–3, pp. 39–47, 2001.
37. G. Stefanou, “The stochastic finite element method: past, present and future,” Computer Methods in Applied Mechanics and Engineering, vol. 198, no. 9-12, pp. 1031–1051, 2009.
38. P. B. Nair and A. J. Keane, “Stochastic reduced basis methods,” AIAA Journal, vol. 40, no. 8, pp. 1653–1664, 2002.
39. S. Acharjee and N. Zabaras, “A non-intrusive stochastic Galerkin approach for modeling uncertainty propagation in deformation processes,” Computers and Structures, vol. 85, no. 5-6, pp. 244–254, 2007.
40. X. F. Xu, “A multiscale stochastic finite element method on elliptic problems involving uncertainties,” Computer Methods in Applied Mechanics and Engineering, vol. 196, no. 25-28, pp. 2723–2736, 2007.
41. A. Nouy, A. Clément, F. Schoefs, and N. Moës, “An extended stochastic finite element method for solving stochastic partial differential equations on random domains,” Computer Methods in Applied Mechanics and Engineering, vol. 197, no. 51-52, pp. 4663–4682, 2008.
42. D. Post, “The Opportunities and Challenges for Computational Science and Engineering,” Julich, Germany, 2007.
43. S. A. Morton, D. R. McDaniel, D. R. Sears, B. Tillman, and T. R. Tuckey, “Kestrel—a fixed wing virtual aircraft product of the CREATE program,” in Proceedings of the 47th AIAA Aerospace Sciences Meeting, Orlando, Fla, USA, January 2009, AIAA 2009-338.
44. P. C. Chen, D. H. Baldelli, and J.
Zeng, “Dynamic flight simulation (DFS) tool for nonlinear flight dynamic simulation including aeroelastic effects,” in Proceedings of the AIAA Atmospheric Flight Mechanics Conference and Exhibit, Honolulu, Hawaii, USA, 2008, AIAA 2008-6376.
45. E. H. Glaessgen, E. Saether, S. W. Smith, and J. D. Hochhalter, “Modeling and characterization of damage processes in metallic materials,” in Proceedings of the 52nd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference, Denver, Colo, USA, 2011, AIAA 2011-2177.
46. D. Kinard, “The Digital Thread–Key to F-35 Joint Strike Fighter Affordability,” Aerospace Manufacturing and Design, 2010, http://www.onlineamd.com/
[FOM] Are Friedman's independence results natural?

Andreas Weiermann weiermann at math.uu.nl
Tue Jan 10 18:32:43 EST 2006

Dear all,

some days ago Harvey commented a bit on the naturality of his statements and he referred a bit to my work. So I feel encouraged to comment a bit on this issue from my point of view.

>b. I found that Weiermann (and others) studied, very carefully for its own
>sake, just what happens when you look at finite sequences of trees and other
>objects with various bounds on rates of growth. They found various exciting
>threshold phenomena, and this is now a little bit of a cottage industry. If
>Weiermann and company would have taken Feferman's negative comments
>seriously, I don't see how they would have developed this mathematically
>beautiful and intricate threshold theory for growth rates. Weiermann has
>written a number of interesting papers and given a number of plenary talks
>on this.

First of all, yes, I agree with Harvey. His FKT is natural in my opinion (and I know a lot more people who agree on this too). In particular I consider results which lead to a rich and intriguing mathematics as natural, without questioning them further. Now what is the intriguing math behind ordinals or trees? Let me give two examples for small ordinals.

To $\alpha=\omega^{a_1}+\cdots+\omega^{a_n}$ in Cantor normal form associate the Goedel number $gn(\alpha)=p_1^{a_1}\cdot\ldots\cdot p_n^{a_n}$, where $0$ gets Goedel number $gn(0)=1$. This is a natural coding, as I hope. Let $c(n)=\#\{\alpha<\omega^\omega : gn(\alpha)\leq n\}$ and $c_d(n)=\#\{\alpha<\omega^d : gn(\alpha)\leq n\}$. Then $\log(c(n))\sim \pi\cdot\frac{2}{3}\sqrt{\frac{\log(n)}{\log\log(n)}}$ and $\log(c_d(n))\sim \frac{1}{(d!)^2}\left(\frac{\log(n)}{\log\log(n)}\right)^d$.

I showed these things to Jaap Korevaar in Amsterdam, who is one of the grandmasters in Tauberian theory, and he got excited. Also Anatoly Vershik from St. Petersburg liked these, and people in analytic combinatorics liked these too.
Such analytic results arise naturally in the context of statements like FKT. Now let us have a look at some places in the literature. First, there is a recent book by Burris on number-theoretic densities and logical limit laws. If one scans through it, then it appears that ordinals form an additive number system and the theory applies. That's appealing and leads (with results of Woods) to logical limit laws for ordinals (joint with Woods). Second, there are papers by Parameswaran and Kohlbecker in TAMS on Tauberian results. Well, these are tailor-made for ordinal counting. Third, after scanning through Ramanujan's collected papers one finds the folklore result mentioned above (with a proof using Tauberian theorems of exponential type). Fourth, scanning through the book on regular variation by Bingham et al. one solves the exercise above easily (but without it the exercise is not that obvious). Finally, take a look at Flajolet's and Sedgewick's online book on analytic combinatorics. All the stuff there on random trees applies to ordinal notations. Natural parameters of random ordinals will usually obey a Gaussian law. Moreover, contour processes for ordinal terms are related to Brownian excursions. I believe that these phenomena promise further interesting results.

Now it has been criticized that FKT has not yet had applications within math. I conjecture that they will come. Anyway, it might be good to give a concrete application in logic. In September 2005 I visited Alan Woods in Perth, and I met Martin Bunder there at the AMS meeting. He asked me about the following problem (Miniaturized Dickson's Lemma). Let $(a^i)_{i=1}^M$ be a sequence of $k$-tuples of non-negative integers such that

1) $a^{i+1}_l\leq a^i_l+1$ for all $i<M$ and $l\leq k$,
2) for all $i<j\leq M$ there is $l\leq k$ with $a^i_l>a^j_l$.

How large can $M$ become as a function of $a^1$? This is a problem related to relevance logic and appears there naturally.
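For very small $k$ and small starting tuples, the maximal $M$ can be found by exhaustive search. Below is a minimal brute-force sketch (illustrative only, and the function name is made up; the Ackermannian growth lives far beyond anything exhaustive search can reach):

```python
from itertools import product

# Brute-force the largest M for the miniaturized Dickson's lemma above:
# condition (1): each coordinate may grow by at most 1 per step;
# condition (2): no tuple coordinatewise-dominates an earlier one.
def longest_bad_sequence(start):
    k = len(start)
    best = 0

    def extensions(seq):
        last = seq[-1]
        for b in product(*(range(c + 2) for c in last)):              # condition (1)
            if all(any(a[l] > b[l] for l in range(k)) for a in seq):  # condition (2)
                yield b

    def dfs(seq):
        nonlocal best
        best = max(best, len(seq))
        for b in extensions(seq):
            dfs(seq + [b])

    dfs([start])
    return best

print(longest_bad_sequence((2,)))    # k=1: the sequence is strictly decreasing, so M = 3
print(longest_bad_sequence((1, 1)))  # k=2: M = 5
```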
According to Martin Bunder, George Szekeres had worked for some time on it. Armed with knowledge of Friedman-style miniaturizations, I solved it and proved Ackermannian lower bounds on $M$ by direct calculations. Of course, a similar result holds for Higman's Lemma too. At the moment some of my coworkers and I also consider Ramsey theory in general, and we expect rich and intriguing output. In particular, Friedman's Ramseyan statements and the canonical Ramsey theorem will come with a nice phase transition.

To sum up, for me it is in particular the intriguing underlying mathematics which makes Friedman's miniaturizations and independence results (for PA) natural, and I still believe that the mathematical potential of these is underestimated to a large extent.

Best wishes,
Andreas Weiermann
June 21, 2012 - While I have a sample Risk-Parity portfolio coming out early tomorrow morning, I want to rush out a brief description of how risk is calculated. This is not for the mathematically squeamish, but the concept is not difficult if one can get past one step in the logic. Don't be surprised if this concept does not alter how portfolios are put together here at ITA Wealth Management. Let me explain by first beginning with a reference to the paper that brought this idea to my attention (Risk Parity Allocation by Edward Qian).

A standard portfolio breakdown between stocks and bonds is 60%/40%. Most investors will argue this is a well-diversified and somewhat conservative portfolio. It is certainly more conservative than most of the ITA Wealth Management portfolios. But is the 60/40 mix a conservative portfolio? I just checked my data sources and found that the standard deviation (SD) of stocks over the past five years is +/-16% and the SD for bonds is +/-3%. However, to take variance into account we must square each number, so the 16% becomes a variance of 256, while for bonds it is 3 x 3, or 9. In terms of variance, stocks are 28 (256/9 = 28.4, or 28 when rounded) times riskier than bonds. That is a huge difference. Even if stocks carried a standard deviation of 15% and bonds 5%, in terms of variance stocks would still be nine times riskier than bonds.

To borrow an analogy from Edward Qian, if we go back to the 60/40 stock-to-bond split, we have six stock eggs and four bond eggs. To calculate the true risk of stocks, we weight the stock eggs by relative variance and find we have 172 (28 x 6 + 4) eggs in total. One hundred sixty-eight (168) out of 172 is approximately 98%. Very close to 100% of the risk of the 60/40 portfolio is carried by the stock portion of the portfolio. I need to begin rethinking how a portfolio is put together, and Platinum readers would do well to pay attention. The 70/30 or even 80/20 mix is fine when stocks are in the ascendancy.
But let another bear market strike, and high stock-to-bond portfolios are in for another two- to three-sigma shakedown. That is exactly what we want to avoid. How to pull together the ideas of Risk-Parity and the ITA Risk Reduction model is a worthy project.
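The egg-counting above is just a variance-contribution calculation. A minimal sketch (the 16%/3% figures are from the post; zero stock-bond correlation is an added simplifying assumption, which the egg argument also makes implicitly):

```python
# Variance contributions of a 60/40 stock/bond mix, assuming zero
# correlation between the two assets (a simplifying assumption).
stock_sd, bond_sd = 0.16, 0.03   # the post's 5-year standard deviations
w_stock, w_bond = 0.60, 0.40

var_from_stocks = (w_stock * stock_sd) ** 2
var_from_bonds = (w_bond * bond_sd) ** 2
stock_share = var_from_stocks / (var_from_stocks + var_from_bonds)

print(f"stock share of portfolio variance: {stock_share:.1%}")  # ~98%
```

The exact figure (about 98.5%) differs slightly from the egg count's 168/172 because the eggs round 28.4 down to 28 and ignore the squared weights, but the conclusion is the same: nearly all of the 60/40 portfolio's risk sits in the stock sleeve.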
Cumberland, RI Algebra 1 Tutor

Find a Cumberland, RI Algebra 1 Tutor

...I make sure they have all they need to succeed at their goal. My main areas of interest are: Elementary School and Young Students: Math and Science. I teach science in general and computers and computer science.
47 Subjects: including algebra 1, chemistry, reading, calculus

...In high school, I tutored other high school students in math and physics. I currently teach social studies to third graders in an after-school program. Over Winter Break, I also had an education-related internship for a company called Kinvolved, which works to improve attendance and graduation rates in schools in underserved communities.
20 Subjects: including algebra 1, physics, calculus, French

...In addition to being a math teacher, I have also been a Peace Corps volunteer in the country of Lesotho in southern Africa. In retrospect, I think my career as a teacher began when I was a student in seventh grade. I had a math teacher who didn't always understand the questions students were asking; nor did she explain everything in a way students understood.
14 Subjects: including algebra 1, chemistry, calculus, geometry

...As an undergraduate, I was a math major at UMass Boston and graduated with high honors. From 2008 to 2012 I was a tutor in the academic support services office at UMass. I primarily tutored probability and statistics, single and multivariable calculus, and college algebra.
14 Subjects: including algebra 1, calculus, geometry, GRE

...My tutoring experience includes working after school for 2 years with students struggling in math classes. I also had a younger brother who always had trouble in his classes. I'm a very patient person who can explain subjects or equations in an easy way.
13 Subjects: including algebra 1, French, algebra 2, precalculus
Can Hölder's Inequality be strengthened for smooth functions?

Is there an $\epsilon>0$ so that for every nonnegative integrable function $f$ on the reals, $$\frac{\| f \ast f \|_\infty \| f \ast f \|_1}{\|f \ast f \|_2^2} > 1+\epsilon?$$ Of course, we want to assume that all of the norms in use are finite and nonzero, and $f\ast f(c)$ is the usual convolved function $\int_{-\infty}^{\infty} f(x)f(c-x)\,dx$. The applications I have in mind have $f$ being the indicator function of a compact set.

A larger framework for considering this problem follows. Set $N_f(x):=\log(\| f \|_{1/x})$. Hölder's Inequality, usually stated as $$\| fg \|_1 \leq \|f\|_p \|g\|_q$$ for $p,q$ conjugate exponents, becomes (with $f=g$) $N_f(1/2+x)+N_f(1/2-x)\geq 2N_f(1/2)$. In other words, Hölder's Inequality implies that $N_f$ is convex at $x=1/2$. The generalized Hölder's Inequality gives convexity on $[0,1]$. It is possible for $N_f$ to be linear, but only if $f$ is a multiple of an indicator function. What I am asking for is a quantitative expression of the properness of the convexity when $f$ is an integrable function.

Examples: The ratio of norms is invariant under replacing $f(x)$ with $a f(cx-d)$, provided that $a>0$ and $a,c,d$ are reals. This means that if $f$ is the indicator function of an interval, we can assume without loss of generality that it is the indicator function of $(-1/2,1/2)$. Now, $f\ast f(x)$ is the piecewise linear function with knuckles at $(-1,0),(0,1),(1,0)$. Therefore, $\|f\ast f\|_\infty=1$, $\|f \ast f\|_1 = 1$, $\|f \ast f \|_2^2 = 2/3$, and the ratio of norms is $3/2$.

Gaussian densities make another nice example because the convolution is easy to express. If $f(x)=\exp(-x^2/2)/\sqrt{2\pi}$, then $f\ast f(x) = \exp(-x^2/4)/\sqrt{4\pi}$, and so $\|f\ast f\|_\infty = 1/\sqrt{4\pi}$, $\|f\ast f\|_1=1$, and $\|f \ast f\|_2^2=1/\sqrt{8\pi}$. The ratio in question is then just $\sqrt{2}$.
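Both closed-form examples above can be sanity-checked numerically. The sketch below discretizes $f$, forms the autoconvolution by a plain Riemann sum, and evaluates the ratio; the grid sizes and truncation windows are illustrative choices, not part of the question.

```python
import math

# Numerically estimate R(f) = ||f*f||_inf * ||f*f||_1 / ||f*f||_2^2
# via a midpoint-rule discretization of the autoconvolution.
def norm_ratio(f, lo, hi, n=800):
    h = (hi - lo) / n
    fv = [f(lo + (i + 0.5) * h) for i in range(n)]
    g = [0.0] * (2 * n - 1)          # g approximates f*f on a grid of step h
    for i in range(n):
        for j in range(n):
            g[i + j] += fv[i] * fv[j] * h
    sup = max(g)
    l1 = sum(g) * h
    l2sq = sum(v * v for v in g) * h
    return sup * l1 / l2sq

# Indicator of (-1/2, 1/2): exact ratio 3/2.
print(norm_ratio(lambda x: 1.0 if abs(x) < 0.5 else 0.0, -0.6, 0.6))
# Standard Gaussian density: exact ratio sqrt(2), about 1.414.
print(norm_ratio(lambda x: math.exp(-x * x / 2) / math.sqrt(2 * math.pi), -8.0, 8.0))
```

Because the ratio is invariant under the rescalings noted above, the small discretization error in the support width does not move the answer much; both runs land within a percent or so of the exact values.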
This problem was considered (without result) by Greg Martin and myself in a series of papers concerning generalized Sidon sets. We found this "nice" example: $f(x)=1/\sqrt{2x}$ if $0 < x < 1/2$, $f(x)=0$ otherwise. Then $f\ast f(x) = \pi/2$ for $0 < x < 1/2$, $f\ast f(x) = (\pi-4\arctan(\sqrt{2x-1}))/2$ for $1/2 < x < 1$, and $f\ast f$ is 0 for $x$ outside of $(0,1)$. We get $\|f \ast f\|_\infty = \pi/2$, $\|f \ast f\|_1 = 1$, $\|f \ast f\|_2^2 = \log 4$, so the norm ratio is $\pi/\log(16) \approx 1.133$. In this paper, Vinuesa and Matolcsi mention some proof-of-concept computations that show that $\pi/\log(16)$ is not extremal.

Tags: fourier-analysis, ca.analysis-and-odes, fa.functional-analysis

Comments:

What happens for intervals? I ask this because I expect the answer to be obtainable by an easy computation. In particular the limit "interval length to 0" should be relevant. – Helge Mar 30 '11 at 19:15

If $f$ is the indicator function of an interval of length $I$, then $\|f \ast f\|_\infty = I$, $\|f\ast f\|_1 = I^2$, and $\|f \ast f\|_2^2 = (2/3) I^3$. The ratio in question, then, is always $3/2$, independent of the length of the interval. – Kevin O'Bryant Mar 30 '11 at 20:28

Sorry, I am not that familiar with the notation. What happens when $f$ is the constant function? – Suvrit Mar 30 '11 at 20:44

You might add Gaussians as another example, with a slightly better ratio ($\sqrt{2}$). – Piero D'Ancona Mar 30 '11 at 21:54

I was going to mention Sidon sets, but realized that this question is probably motivated by something similar to it. :) – Willie Wong Mar 30 '11 at 22:27

3 Answers

Some initial thoughts:

• the question is basically asking whether $f*f$ can be close to a constant multiple $c1_E$ of an indicator function (these are the only non-negative functions for which Hölder is sharp).
• the hypothesis that $f$ is non-negative is going to be crucial.
Note that any Schwartz function can be expressed as $f*f$ for some complex-valued $f$ by square-rooting the Fourier transform, and so by approximating an indicator function by a Schwartz function we see that there is no gain.
• On the other hand, the hypothesis that $f$ is an indicator function (or a constant multiple thereof) is only of limited utility, because any non-negative function in $L^1$ can be expressed as the weak limit of constant multiples of indicator functions (similarly to how a grayscale image can be rendered using black and white pixels in the right proportion; the indicator function here is of a random union of small intervals whose intensity is proportional to $f$).
• If $f*f$ is close to $c1_E$, and we normalise $c=1$ and $|E|=1$, then $f$ has $L^1$ norm close to $1$ and the Fourier transform has $L^4$ norm close to $1$ (i.e. the Gowers $U^2$ norm is close to $1$). Using the quantitative idempotent theorem of Green and Sanders we also see that the $L^2$ norm of $f$ (which controls the Wiener norm of $f*f$) is much larger than $1$. But it's not clear to me where to go next from here.

Terry, instead of using the idempotent theorem I think a result of Rudin, improved by Saeki, is more relevant: MR0225102 (37 #697) Saeki, Sadahiro, "On norms of idempotent measures," Proc. Amer. Math. Soc. 19 (1968), 600–602. In R, where there are no interesting compact subgroups, this will tell you that $\|\hat{1}_E\| > 1.2$. Will have to think about the details. – Ben Green Mar 31 '11 at 20:21

Hmm, it's not so clear. The problem is that it seems to be hard to say much about $f$. For example, $f$ could consist of spikes of height $N$ and width $1/N^2$ about a Sidon set of size about $N$; then $f \ast f$ will look a bit like the characteristic function of a union of $N$ intervals of width $\sim 1/N$; call this set $E$. Unfortunately I can't conclude anything useful about $\Vert \hat{1}_E \Vert_1 = \Vert f \Vert_2^2$ being small.
– Ben Green Mar 31 '11 at 20:58

Let $A$ be a Sidon set with $|A|=\sqrt{n}$, and take $f(x)=\sum_{a\in A} 1_{(a-1/2,a+1/2)}(x)$, so that $f*f$ is piecewise linear with knuckles at $(k,r(k))$, where $r(k)$ is the number of representations of the integer $k$ as a sum of two elements of $A$. Then $\|f*f\|_{\infty}=2$ (by Sidon-ness), and $\|f*f\|_1=n$ (by $|A|=\sqrt{n}$). But $\|f*f\|_2^2$ depends on the number of intervals in $A+A$, or (equivalently) on the sum $\sum_k r(k)r(k+1)$. This is a difference between the continuous and discrete settings: in the discrete setting we always have $\|f*f\|_2^2=2|A|^2-|A|$, and the norm ratio would tend to 1. – Kevin O'Bryant Mar 31 '11 at 21:18

Reminds me a bit of Talagrand's 2nd $1000 conjecture, a special case of which is the following: Let $f$ be a nonnegative function on the reals and let $g = U_t f$, where $U_t$ is the Ornstein-Uhlenbeck semigroup and $t$ is some fixed positive number; say, $t = 1$. Then Markov's inequality is not tight for $g$; i.e., $\Pr[g > c \mathrm{E}[g]] = o(1/c)$, where the probability is with respect to the Gaussian distribution. I'm pretty sure this special case is hard enough that Talagrand would give you a fraction of the $1000 for it.

Not an answer, but rather an extended comment. Consider the following problem. Given a set of integers $A\subset [1,N]$, denote by $\nu(n)$ the number of representations of $n$ as a sum of two elements of $A$. Thus, $\nu=1_A\ast 1_A$ up to normalization, and, trivially, we have $$\sum_n \nu^2(n) \le |A|^2 \max_n \nu(n).$$ Does there exist an absolute constant $\varepsilon>0$ such that if $$\sum_n \nu^2(n) > (1-\varepsilon) |A|^2 \max_n\nu(n),$$ then $\alpha:=|A|/N\to 0$ as $N\to\infty$? (The flavor of this question to me is as follows: we want to draw a conclusion about a finite set, given that its "additive energy" is large -- but not as large as in Balog-Szemeredi-Gowers.)
What is the relation between this and the original problem? Although I cannot formally establish equivalence in both directions, it is my understanding that the two problems are "essentially equivalent"; at least, if in the original problem we confine ourselves to indicator functions of open sets.
Bergenfield Math Tutor

...I have a bachelor's degree in physics. I have tutored high school geometry both privately and for the Princeton Review.
20 Subjects: including ACT Math, algebra 1, algebra 2, SAT math

...I have also coordinated and taught SAT Math classes. I have experience working with students ranging from 4th-12th grades, including students diagnosed with ADHD. Simply put, my style of teaching is personalized so that I can help each individual student reach his/her personal goals in the coursework.
24 Subjects: including algebra 1, ACT Math, geometry, SAT math

...I received my Bachelor's in Math from Montclair State University. Now I am getting my Master's in Education from Montclair State University. I have tutored students before in mathematics, and success in math is all about practice.
12 Subjects: including calculus, vocabulary, statistics, SAT math

...My education includes a bachelor's in mathematics and physics, as well as a master's in pure and applied mathematics. I am currently a college professor with a full-time job at a community college. My style of teaching is to treat the student as an equal.
7 Subjects: including discrete math, algebra 1, algebra 2, calculus

...One of my electives was tutoring for the Advancement Via Individual Determination (AVID) program that my high school offered. I am currently tutoring students who go to some of the top prep schools in NYC, including Trinity and Collegiate. I enjoy working with students and helping them reach their academic potential.
16 Subjects: including SAT math, algebra 1, algebra 2, biology
{"url":"http://www.purplemath.com/bergenfield_math_tutors.php","timestamp":"2014-04-19T20:05:17Z","content_type":null,"content_length":"23570","record_id":"<urn:uuid:9ff83ee1-b041-43a9-913b-80917372c7d2>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00480-ip-10-147-4-33.ec2.internal.warc.gz"}
RE: st: interaction dummy or separate regression

Notice: On March 31, it was announced that Statalist is moving from an email list to a forum. The old list will shut down at the end of May, and its replacement, statalist.org, is already up and running.

From    "Khieu, Hinh" <Hdkhieu@usi.edu>
To      "statalist@hsphsun2.harvard.edu" <statalist@hsphsun2.harvard.edu>
Subject RE: st: interaction dummy or separate regression
Date    Wed, 28 Sep 2011 17:11:08 -0500

Thanks again for your prompt help. Chg in Y is capital expenditures. I may need to run the model again on something else; that is why I put Y for generalization. I have several definitions of abnormal investment: 1/ as in my original email, abnormal values are those that exceed 200% of industry averages for the past 3 years; 2/ abnormal = residuals from a regression of a normal investment equation. As you can see, both measures are like Y = A minus B, and Y could be positive or negative. Only positive Y's are abnormal and the negative Y's are irrelevant for me. If the Stata code should be:

xtreg Y debt equity if smallfirm==0, fe robust

then the only way for me to get Y to be abnormal is either to drop the negative Y or to add a second condition in the if statement, "& Y>0". If I use the residual values from definition 2 of the measure, I don't know how appropriate it is for the following reason: residuals are forecast error, and if something else in the second stage can explain the forecast error, it should be in the first stage too. This logic leads me to use just Investment (which includes both normal and abnormal values) as the dependent variable. Using it that way and including the dummy or separating the sample into two, according to you, would be problematic. I wonder if you have any suggestions. I would appreciate them. Thank you so much.
From: owner-statalist@hsphsun2.harvard.edu [owner-statalist@hsphsun2.harvard.edu] On Behalf Of Austin Nichols [austinnichols@gmail.com]
Sent: Wednesday, September 28, 2011 12:34 PM
To: statalist@hsphsun2.harvard.edu
Subject: Re: st: interaction dummy or separate regression

Khieu, Hinh <Hdkhieu@usi.edu>:
I already ruled out option 2, as selection on the dependent variable is not allowed. Option 1 is okay, but I would suggest you explore an alternative link, as the explanatory variables likely have a very nonlinear relationship with Y. Imagine the cube root of y is a linear function of x, and you regress y on x, in which case extreme values of y will seem to have a very different relationship with x. See the help file for -glm- for starters. You can also select on X, as I mentioned, which may illuminate the appropriate link, if you can construct a reasonable piecewise linear approximation. Better advice may follow if you are more explicit about what you are modeling, i.e. what is Y, what is the data, etc.

On Wed, Sep 28, 2011 at 1:19 PM, Khieu, Hinh <Hdkhieu@usi.edu> wrote:
> Austin,
> Thank you very much for your note. I had a feeling it was not right, but did not know what was not right, and you answered it. So, to test whether debt or equity is used to finance abnormal Y, there are only two ways:
> 1. put abnormal Y as dependent variable and drop all dummy and interaction terms
> 2. still keep Chg in Y as dependent variable but run one regression based on Abnormal Y observations alone and another regression based on Other Y.
> I wonder if you can do me a favor by commenting on my two solutions above.
> Thank you very much.
> Regards, > Hinh > ________________________________________ > From: owner-statalist@hsphsun2.harvard.edu [owner-statalist@hsphsun2.harvard.edu] On Behalf Of Austin Nichols [austinnichols@gmail.com] > Sent: Wednesday, September 28, 2011 12:11 PM > To: statalist@hsphsun2.harvard.edu > Subject: Re: st: interaction dummy or separate regression > Khieu, Hinh <Hdkhieu@usi.edu> : > You are not going to get unbiased estimates of any of those coefs, if > that's what you mean. You are not allowed to select on Y, nor include > a transformation of it as a regressor, and I strongly recommend you > explore what you are estimating using a simulation on generated data > where you know the true effects (a1, a2, etc.). You are allowed to > select on an exogenous X variable, but not on "abnormal" Y. > On Wed, Sep 28, 2011 at 1:00 PM, Khieu, Hinh <Hdkhieu@usi.edu> wrote: >> Dear statalist members, >> I have the following model and I am not sure if there is an econometric issue with it. I would appreciate any amount of help. Change in Y = a1*growth opportunities + a2*profit + a3*debt + a4*equity + a5*dummy (=1 if change in Y is abnormally high, zero otherwise) + a6 * debt * dummy + a7 * equity * dummy, where abnormally high is defined to be whenever change in Y is greater than 2 times the industry average of Y over the last 3 years (t, t-1, and t-2). >> I run fixed effects regression with firm and year dummies on the above model for 2 groups of firms: large firms versus small firms. My question is: is there any mechanical or econometric problem with using the dummy for abnormal Y and its interaction with debt and equity? I know I can split the sample into abnormal Y and normal Y and run two separate regressions. But I want to know specifically if the model above is problematic from an econometric perspective. What if I drop the dummy and keep only the interactions? 
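Nichols's advice above — generate data where the true effect is known, and see what selecting on Y does to the estimate — is easy to try outside Stata as well. A minimal numpy sketch (the data are hypothetical, not the poster's; the true slope is set to 1):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 50_000
x = rng.standard_normal(n)
y = 1.0 * x + rng.standard_normal(n)   # true model: slope 1

def ols_slope(x, y):
    """Slope of y on x with an intercept, via least squares."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

full = ols_slope(x, y)                    # close to the true value of 1
keep = y > 0                              # selecting on the dependent variable
truncated = ols_slope(x[keep], y[keep])   # badly attenuated

print(f"full sample slope  : {full:.3f}")
print(f"y > 0 sample slope : {truncated:.3f}")
```

The truncated estimate is strongly attenuated even though the data-generating process is unchanged, which is exactly the danger of putting only the "abnormal" (Y > 0) observations into the regression.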
* For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2011-09/msg01294.html","timestamp":"2014-04-19T04:26:18Z","content_type":null,"content_length":"13718","record_id":"<urn:uuid:3afdb3bb-7975-4db9-9c86-29d1c8c280fa>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00071-ip-10-147-4-33.ec2.internal.warc.gz"}
Here's the question you clicked on:

Definition of the definite integral question:

Using the definition of the definite integral, compute: \[\int\limits_{-4}^{2}(3x^2+12x+20)dx\]

Is the definition of a definite integral FTC or the definition of an integral: lim (x,y)-> (0,0) and so on?

take the anti derivative: the antiderivative of 3x^2 + 12x + 20 is x^3 + 6x^2 + 20x. then you plug in 2 and then -4: (2)^3 + 6(2)^2 + 20(2) - [ (-4)^3 + 6(-4)^2 + 20(-4) ] = 8 + 24 + 40 - [ -64 + 96 - 80 ] = 72 - [ -48 ] = 120

Definition of integral. @jayz657 Thanks but I want an explanation on this concept.

I can calculate the anti derivative fine but I have to use Riemann sums @jayz657.

Just an explanation of some sort for Riemann sums would be fine. I can do the rest I am sure.

[whiteboard sketch: rectangles drawn under the curve] this is the Riemann sum: you are drawing an infinite number of rectangles under the curve. The width of each is the change in x and the height is f(x), and you sum up each rectangle to get the area under the curve, so you will get \[\sum_{i} f(x_i)\Delta x\] over the interval from -4 to 2.

Yes I know the fundamental concept. But how would I exactly apply it?

when you sum up the infinite number of rectangles you will get the integral there

You integrate the equation and then you get a function F(x). The top number in the definite integral is b and the lower is a. F(b) - F(a) is your answer.

Yeah but how would I exactly sum up an infinite number of rectangles for my given function?

Do you want to see how to do this problem or the theory behind it?

@malical: I know how to find an antiderivative. I just have trouble applying Riemann sums.

Why are Riemann sums important?

The question specifically says to use Riemann sums.

Never mind. I got it :) .

the change in x is the width of each rectangle, and rate of change is related to the derivative, dx. so in this equation you can just draw as many rectangles as you want under the curve, using a delta x width and f(x) as your height. here's an example doing it the long way: let's say I make 4 rectangles, each of width (2 - (-4))/4 = 1.5 [whiteboard sketch: four rectangles of width 1.5]

Yeah I got it thanks :) .

ok np
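The thread's idea — widths Δx, heights f(x), let the number of rectangles grow — is quick to check numerically. A short Python sketch (not from the thread; with antiderivative x³ + 6x² + 20x the exact value is 72 − (−48) = 120):

```python
def f(x):
    return 3 * x**2 + 12 * x + 20

def riemann_sum(f, a, b, n):
    """Midpoint Riemann sum: n rectangles of width (b - a)/n."""
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

for n in (4, 100, 10_000):
    print(n, riemann_sum(f, -4, 2, n))   # approaches 120 as n grows
```

The midpoint rule is used here for faster convergence; left- or right-endpoint sums approach the same limit, just more slowly.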
{"url":"http://openstudy.com/updates/50a5bef6e4b0329300a9171b","timestamp":"2014-04-18T23:56:05Z","content_type":null,"content_length":"106048","record_id":"<urn:uuid:5af912e5-afd0-4b48-85fb-d5ea1256e2f6>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00082-ip-10-147-4-33.ec2.internal.warc.gz"}
Electromagnetic momentum

Next: Momentum conservation Up: Electromagnetic energy and momentum Previous: Energy conservation

We have seen that electromagnetic waves carry energy. It turns out that they also carry momentum. Consider the following argument, due to Einstein. Suppose that we have a railroad car of mass M and length L which is free to move in one dimension (see Fig. 54). Suppose that electromagnetic radiation of total energy E is emitted from one end of the car, propagates along the length of the car, and is absorbed at the other end. If this radiation carries a momentum p then, by conservation of momentum, the car recoils with momentum p while the radiation is in transit: i.e., the car moves in the opposite direction to the direction of propagation of the radiation. The radiation takes a time t ≈ L/c to cross the car, during which the car moves a distance x = (p/M)(L/c). It is assumed that the recoil velocity p/M is much less than c, so that x ≪ L. But, what actually causes the car to move? The centre of mass of the whole system must remain stationary, since there are no external forces. The radiation has a mass equivalent m = E/c² (from E = mc²) which is transported a distance L, so the centre of mass stays fixed provided that M x = (E/c²) L. Combining this with x = pL/(Mc) gives

p = E/c.

Thus, the momentum carried by electromagnetic radiation equals its energy divided by the speed of light. The same result can be obtained from the well-known relativistic formula

E² = p²c² + m²c⁴

relating the energy E, momentum p, and rest mass m of a particle. According to quantum theory, electromagnetic radiation is made up of massless particles called photons. Thus, for individual photons, p = E/c, so the same must be true of electromagnetic radiation as a whole. It follows from Eq. (1046) that the momentum density g of electromagnetic radiation equals its energy density U divided by c: g = U/c. It is reasonable to suppose that the momentum points along the direction of the energy flow (this is obviously the case for photons), so the vector momentum density (which gives the direction, as well as the magnitude, of the momentum per unit volume) of electromagnetic radiation is

g = ε₀ E × B = S/c²,

where S = E × B/μ₀ is the energy flux (Poynting vector). Thus, the momentum density equals the energy flux over c². Of course, the electric field associated with an electromagnetic wave oscillates rapidly, which implies that the previous expressions for the energy density, energy flux, and momentum density of electromagnetic radiation are also rapidly oscillating. It is convenient to average over many periods of the oscillation (this average is denoted ⟨ ⟩). For a plane wave of peak electric field E₀,

⟨U⟩ = ε₀ E₀²/2,  ⟨S⟩ = c ε₀ E₀²/2 = c ⟨U⟩,  ⟨g⟩ = ε₀ E₀²/(2c),

where the factor 1/2 comes from averaging the cos² oscillation. Since electromagnetic radiation possesses momentum then it must exert a force on bodies which absorb (or emit) radiation. Suppose that a body is placed in a beam of perfectly collimated radiation, which it absorbs completely. The amount of momentum absorbed per unit time, per unit cross-sectional area, is simply the amount of momentum contained in a volume of length c and unit cross-sectional area: i.e., c times the momentum density g. The resulting force per unit area, i.e., the radiation pressure, is given by

p = c ⟨g⟩ = ⟨U⟩.

So, the pressure exerted by collimated electromagnetic radiation is equal to its average energy density. Consider a cavity filled with electromagnetic radiation. What is the radiation pressure exerted on the walls? In this situation, the radiation propagates in all directions with equal probability. Consider radiation propagating at an angle θ to the local normal to a wall. The amount of such radiation hitting the wall per unit time, per unit area, is proportional to cos θ, and the component of momentum normal to the wall which this radiation carries is also proportional to cos θ. Thus, the pressure is as in Eq. (1054), except that it is weighted by the average of cos²θ over all directions, which is 1/3. Hence, for isotropic radiation,

p = ⟨U⟩/3.

Clearly, the pressure exerted by isotropic radiation is one third of its average energy density. The power incident on the surface of the Earth due to radiation emitted by the Sun is about 1300 W m⁻². Treating this radiation as perfectly collimated and completely absorbed, the resulting radiation pressure is p = ⟨S⟩/c ≈ 4 × 10⁻⁶ N m⁻². Thus, the radiation pressure exerted on the Earth is minuscule (one atmosphere equals about 10⁵ N m⁻²). Nevertheless, this tiny pressure has visible consequences: comets generally possess two tails. One (called the gas tail) consists of ionized gas, and is swept along by the solar wind (a stream of charged particles and magnetic field-lines emitted by the Sun). The other (called the dust tail) consists of uncharged dust particles, and is swept radially outward from the Sun by radiation pressure. Two separate tails are observed if the local direction of the solar wind is not radially outward from the Sun (which is quite often the case). The radiation pressure from sunlight is very weak. However, that produced by laser beams can be enormous (far higher than any conventional pressure which has ever been produced in a laboratory). For instance, the lasers used in Inertial Confinement Fusion (e.g., the NOVA experiment in Lawrence Livermore National Laboratory) typically have enormous energy fluxes, and the corresponding radiation pressures vastly exceed anything achievable by mechanical means.

Next: Momentum conservation Up: Electromagnetic energy and momentum Previous: Energy conservation

Richard Fitzpatrick 2006-02-02
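The collimated-absorption result (pressure equals flux divided by c) is easy to evaluate numerically. A short Python sketch (the 1366 W m⁻² solar-constant figure is a standard top-of-atmosphere value, supplied here as an assumption rather than taken from the text above):

```python
# Radiation pressure of a perfectly collimated beam: p = <S>/c.
c = 299_792_458.0      # speed of light (m/s)
solar_flux = 1366.0    # solar constant at Earth (W/m^2), standard value

p_absorbed = solar_flux / c       # perfectly absorbing surface (Pa)
p_reflected = 2 * solar_flux / c  # a perfect mirror doubles the momentum transfer

print(f"absorbing surface: {p_absorbed:.2e} Pa")
print(f"perfect mirror   : {p_reflected:.2e} Pa")
```

Both values are roughly eleven orders of magnitude below atmospheric pressure, consistent with the "minuscule" description.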
{"url":"http://farside.ph.utexas.edu/teaching/em/lectures/node90.html","timestamp":"2014-04-17T15:26:36Z","content_type":null,"content_length":"21388","record_id":"<urn:uuid:ee1dc617-d4fb-4744-90d8-80fae1a45af1>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00153-ip-10-147-4-33.ec2.internal.warc.gz"}
octpch — Converts a pitch-class value to octave-point-decimal. octpch (pch) (init- or control-rate args only) where the argument within the parentheses may be a further expression. octpch and its related opcodes are really value converters with a special function of manipulating pitch data. Data concerning pitch and frequency can exist in any of the following forms:

Table 19. Pitch and Frequency Values

Name                               Abbreviation
octave point pitch-class (8ve.pc)  pch
octave point decimal               oct
cycles per second                  cps
Midi note number (0-127)           midinn

The first two forms consist of a whole number, representing octave registration, followed by a specially interpreted fractional part. For pch, the fraction is read as two decimal digits representing the 12 equal-tempered pitch classes from .00 for C to .11 for B. For oct, the fraction is interpreted as a true decimal fractional part of an octave. The two fractional forms are thus related by the factor 100/12. In both forms, the fraction is preceded by a whole number octave index such that 8.00 represents Middle C, 9.00 the C above, etc. Midi note number values range between 0 and 127 (inclusively) with 60 representing Middle C, and are usually whole numbers. Thus A440 can be represented alternatively by 440 (cps), 69 (midinn), 8.09 (pch), or 8.75 (oct). Microtonal divisions of the pch semitone can be encoded by using more than two decimal places. The mnemonics of the pitch conversion units are derived from morphemes of the forms involved, the second morpheme describing the source and the first morpheme the object (result). Thus cpspch(8.09) will convert the pitch argument 8.09 to its cps (or Hertz) equivalent, giving the value of 440. Since the argument is constant over the duration of the note, this conversion will take place at i-time, before any samples for the current note are produced. By contrast, the conversion cpsoct(8.75 + k1) gives the value of A440 transposed by the octave interval k1.
The calculation will be repeated every k-period since that is the rate at which k1 varies. The conversion from pch, oct, or midinn into cps is not a linear operation but involves an exponential process that could be time-consuming when executed repeatedly. Csound now uses a built-in table lookup to do this efficiently, even at audio rates. Because the table index is truncated without interpolation, pitch resolution when using one of these opcodes is limited to 8192 discrete and equal divisions of the octave, and some pitches of the standard 12-tone equally-tempered scale are very slightly mistuned (by at most 0.15 cents). Here is an example of the octpch opcode. It uses the file octpch.csd. Example 539. Example of the octpch opcode. See the sections Real-time Audio and Command Line Flags for more information on using command line flags.

; Select audio/midi flags here according to platform
; Audio out   Audio in
-odac        -iadc    ;;;RT audio I/O
; For Non-realtime output leave only the line below:
; -o octpch.wav -W ;;; for file output any platform

; Initialize the global variables.
sr = 44100
kr = 4410
ksmps = 10
nchnls = 1

; Instrument #1.
instr 1
  ; Convert a pitch-class value into an
  ; octave-point-decimal value.
  ipch = 8.09
  ioct = octpch(ipch)
  print ioct
endin

; Play Instrument #1 for one second.
i 1 0 1

Its output should include a line like this:

instr 1: ioct = 8.750
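The same pch → oct → cps chain can be mimicked outside Csound for checking values. A minimal Python sketch (the function names merely mirror the opcode names; equal temperament with A440 at oct 8.75 is assumed, and no table lookup or index truncation is modelled):

```python
def octpch(pch):
    """Convert octave.pitch-class (8ve.pc) to octave-point-decimal.

    The fractional part is read as pitch-class hundredths and rescaled
    by the factor 100/12 described above; extra decimals (microtones)
    pass through unchanged.
    """
    octave = int(pch)
    return octave + (pch - octave) * 100.0 / 12.0

def cpsoct(oct_val):
    """Convert octave-point-decimal to cycles per second (A440 = oct 8.75)."""
    return 440.0 * 2.0 ** (oct_val - 8.75)

print(octpch(8.09))          # A above Middle C, ~8.75
print(cpsoct(octpch(8.09)))  # ~440 Hz
print(cpsoct(8.00))          # Middle C, ~261.6 Hz
```

Because the arithmetic is done in floating point rather than with a 8192-entry lookup table, the tiny mistunings mentioned above do not occur here.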
{"url":"http://www.csounds.com/manual/html/octpch.html","timestamp":"2014-04-21T07:06:30Z","content_type":null,"content_length":"11825","record_id":"<urn:uuid:d1d2abe7-f4a7-4a26-b725-88b55317cea9>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00324-ip-10-147-4-33.ec2.internal.warc.gz"}
Tide Coeff

Tidal coefficients are a system primarily used in France and therefore featured in many French pilot books, web sites and harbour notices. It's a tabular system that depicts the 'size' or 'magnitude' of the expected tide at a glance. It eliminates the need to look up and calculate the range of tides, and the need to determine whether it's neaps or springs or anywhere in between. The coefficients usually range between 20 and 120, and a good typical guide is:

20 – very small neap
45 – mean neap
70 – average tide
95 – mean spring
120 – very big spring

As an example of its use, suppose the sill at some French marina opens at HW±2 when the coefficient is greater than 70. This means that with an average sort of tide, that's when you can expect to enter. It follows that if the tide coefficient tends towards the neap end of the spectrum, the marina gate will open later. The coefficient can also be used as a guide for tidal stream rates. The coefficient tables are valid for all areas featured in SailingAlmanac.
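The rough guide above can be turned into a simple lookup. A tiny Python sketch (the breakpoints and labels are exactly the five listed above; matching a coefficient to the nearest reference value is my own simplification, not an official scheme):

```python
GUIDE = [(20, "very small neap"), (45, "mean neap"), (70, "average tide"),
         (95, "mean spring"), (120, "very big spring")]

def describe(coefficient):
    """Return the guide label whose reference coefficient is closest."""
    return min(GUIDE, key=lambda entry: abs(entry[0] - coefficient))[1]

print(describe(68))   # nearest to 70
print(describe(110))  # nearest to 120
```

For the marina-sill example, the published rule (coefficient greater than 70) would of course be applied to the coefficient itself, not to the label.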
{"url":"http://www.sailingalmanac.com/Almanac/Tides/Coefficients/tidecoeff.html","timestamp":"2014-04-16T14:04:52Z","content_type":null,"content_length":"10938","record_id":"<urn:uuid:f71165d9-2739-4e28-aa3c-a13a8bebbe10>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00403-ip-10-147-4-33.ec2.internal.warc.gz"}
if (x.equals("Hello")) or "World"

August 18th, 2013, 05:49 PM

if (x.equals("Hello")) or "World"

if (x = y != z) tests if x is equal to y OR z. How do I do this for .equals? if (x.equals("Hello" or "World")

August 18th, 2013, 05:59 PM

Re: if (x.equals("Hello")) or "World"

That is not how to compare multiple values in Java. In fact, x = y != z does not test x against y or z at all: != binds more tightly than assignment, so y != z is evaluated first and its boolean result is assigned to x. You need to use the logical or operator || to chain together multiple conditions using logical or.

Code java: if(x == y || x == z)

The same logical code statement applies to .equals:

Code java: if(x.equals(y) || x.equals(z))
{"url":"http://www.javaprogrammingforums.com/%20loops-control-statements/31033-if-x-equals-hello-world-printingthethread.html","timestamp":"2014-04-16T16:55:35Z","content_type":null,"content_length":"6022","record_id":"<urn:uuid:63e2234a-8cb3-4cc3-b952-c8b5c0a8abfc>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00530-ip-10-147-4-33.ec2.internal.warc.gz"}
Here's the question you clicked on: @matricked Find the value of x.
{"url":"http://openstudy.com/updates/50c88006e4b0b766106e3f84","timestamp":"2014-04-19T07:36:25Z","content_type":null,"content_length":"55709","record_id":"<urn:uuid:ca96e1c6-5144-4939-a3a6-08b2e8f0137b>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00064-ip-10-147-4-33.ec2.internal.warc.gz"}
Fantasy Baseball Cafe

Re: Formula for calculating value of stats

If I'm reading your question right, just use a Z score formula. It's what sites like baseballmonster.com and Ziguana use to compare stats relatively. If you pay for baseballmonster.com you can enter your own projections in, I think, but Z score formulas can be done in excel if you have the time and don't want to pay.

Last edited by silentjim on Wed Mar 16, 2011 11:55 am, edited 1 time in total.

Thanks for the site, I'll take a look. And sorry, I misspoke - while I understand it's easier to find steals later than other cats - I'm trying to just get a formula that would account as best as possible for the variance of value between each category. It may not be doable in excel without spending 2 days on it.

Re: Formula for calculating value of stats

Here's a good thread on the basketball side that has some baseball links in it as well. It's not as complicated as you think, but it does take some excel know-how: Calculating Values

Re: Formula for calculating value of stats

pjs1856 wrote: That's part of the problem. I'm trying to find something that evens everything out. I know for my league I will need about 270 HR, 1100 R, 185 SB and around 1050 RBI - the problem is coming up with a formula that adds all those into a value. Actually the bigger problem is the %. I figured someone had something they could help with since this seems to be a great site for

I'm not really sure what the hard part is. Pick any of the stats as your numerator and then use the 4 stats in the denominator. This will give you 4 ratios that you can use in an equation. This gives you the following numbers: HR - 4, SB - 6, R - 1.05 and RBI - 1. Your equation is incredibly simple:

Value = HRx4 + SBx6 + Rx1.05 + RBIx1

Juan Pierre = 560
Albert Pujols = 497

I have gone as far as making an equation for BA (and it works) but I'll have to find the old spreadsheet that I used to remember it.
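The Z-score idea mentioned above — express each category as standard deviations from the pool average, then sum — takes only a few lines. A small numpy sketch (the three players and their projections are made up purely for illustration):

```python
import numpy as np

# Hypothetical projections: rows are players, columns are HR, R, RBI, SB
players = ["Player A", "Player B", "Player C"]
stats = np.array([[40.0, 110.0, 115.0,  5.0],
                  [25.0,  95.0,  90.0, 35.0],
                  [15.0,  70.0,  60.0, 10.0]])

# Z score: standard deviations above/below the pool average per category
z = (stats - stats.mean(axis=0)) / stats.std(axis=0)
value = z.sum(axis=1)   # equal weight to every category

for name, v in sorted(zip(players, value), key=lambda t: -t[1]):
    print(f"{name}: {v:+.2f}")
```

Because every category is put on the same scale before summing, a rare stat like steals counts as much as homers, which is the "variance of value between each category" concern raised above.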
But I haven't spent much time working on this because I find it to be of little value. My website - The World Is Not That Big Re: Formula for calculating value of stats silentjim wrote:Here's a good thread on the basketball side that has some baseball links in it as well. It's not as complicated as you think, but it does take some excel know how: Calculating Values Thanks all for the look at this! I appreciate the help and I think this log from the bball one will probalby give me what i'm looking for. Re: Formula for calculating value of stats I have used the following hitting formula for 6 years and it seems to work well. I averaged fangraphs, mine & J35J projections this year and I like the results. For hitting categories, take your projections and use this formula. (BA x 333)+(HR x 3.33)+(RBI)+(RUNS)+(SB x 2.5) It uses a baseline that every category is 100 points. So 30 HR is 100 pts, and 40 SB is 100pts, etc. If you use less than 40 for SB, then the rankings come out with too much emphasis on speed. From the previous example above: Pujols comes out to be 511 Pierre comes out to be 354 Re: Formula for calculating value of stats rotoquest wrote:I have used the following hitting formula for 6 years and it seems to work well. I averaged fangraphs, mine & J35J projections this year and I like the results. For hitting categories, take your projections and use this formula. (BA x 333)+(HR x 3.33)+(RBI)+(RUNS)+(SB x 2.5) It uses a baseline that every category is 100 points. So 30 HR is 100 pts, and 40 SB is 100pts, etc. If you use less than 40 for SB, then the rankings come out with too much emphasis on speed. From the previous example above: Pujols comes out to be 511 Pierre comes out to be 354 Thanks! that's almost exactly what i'm looking for. Have a formula for pitching as well? Re: Formula for calculating value of stats Interesting read. Not a bad formula, I've tried it. linky
{"url":"http://www.fantasybaseballcafe.com/forums/viewtopic.php?t=424134&start=10","timestamp":"2014-04-16T22:00:54Z","content_type":null,"content_length":"75898","record_id":"<urn:uuid:74a7dcfa-4bab-4073-80f4-426e078e3b46>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00364-ip-10-147-4-33.ec2.internal.warc.gz"}
[pygtk] image manipulation

Tim Evans t.evans at aranz.com
Mon Sep 11 07:30:30 WST 2006

Rodrigo Renie Braga wrote:
> ok, i'm doing some tests here with Numeric, and it seems that the
> gtk.gdk.Pixbuf.get_pixels_array() returns a (Numeric) array with the
> following shape: (w, h, 3), meaning a matrix with a dimension of WxHx3 ...
> so, to access the rgb of a pixel on position (X, Y), i just use:
> matrix[x][y][0] for red
> matrix[x][y][1] for green
> matrix[x][y][2] for blue

The correct indices are actually:

matrix[y][x][0] for red
matrix[y][x][1] for green
matrix[y][x][2] for blue

Think of the matrix as a sequence of rows, where each row is a sequence of pixels, and each pixel is a 3-element vector of RGB (or 4-element RGBA).

> is that right? And what is the best way to go through this matrix? maybe:
> for i in matrix:
>     for j in i:
>         j[0] = color_red
>         j[1] = color_green
>         j[2] = color_blue
> is this iteration changing the value of matrix? ( I don't like the
> looks of this loop.. :-) )

This will work; I'd just recommend using descriptive variable names to make it more obvious:

for row in matrix:
    for pixel in row:
        pixel[0] = color_red
        pixel[1] = color_green
        pixel[2] = color_blue

Tim Evans
Applied Research Associates NZ

More information about the pygtk mailing list
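Tim's row-first indexing is easy to verify with a plain array of the same layout. A small numpy sketch (numpy stands in here for the old Numeric module; the assumption, consistent with the correction above, is that the pixel array has shape (height, width, channels)):

```python
import numpy as np

height, width = 4, 6
matrix = np.zeros((height, width, 3), dtype=np.uint8)  # rows of pixels, each pixel RGB

# Paint the pixel at x=5, y=2 red: the row index (y) comes first
x, y = 5, 2
matrix[y][x][0] = 255  # red
matrix[y][x][1] = 0    # green
matrix[y][x][2] = 0    # blue

# The loop from the thread, with descriptive names; each `pixel` is a
# 3-element view, so assigning into it modifies the image in place
for row in matrix:
    for pixel in row:
        pass

print(matrix[2, 5])  # the red pixel we set
```

Indexing `matrix[x][y]` instead would raise an IndexError here (x=5 exceeds the 4 rows), which is a quick way to catch the mistake the original poster made.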
{"url":"http://www.daa.com.au/pipermail/pygtk/2006-September/012840.html","timestamp":"2014-04-21T01:59:18Z","content_type":null,"content_length":"3881","record_id":"<urn:uuid:7aaaa6c0-9640-47b5-9de9-ff96e45b6be6>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00089-ip-10-147-4-33.ec2.internal.warc.gz"}
Results 1 - 10 of 78 - Neural Computing Surveys , 2001 "... A common problem encountered in such disciplines as statistics, data analysis, signal processing, and neural network research, is finding a suitable representation of multivariate data. For computational and conceptual simplicity, such a representation is often sought as a linear transformation of the ..." Cited by 1492 (93 self) Add to MetaCart A common problem encountered in such disciplines as statistics, data analysis, signal processing, and neural network research, is finding a suitable representation of multivariate data. For computational and conceptual simplicity, such a representation is often sought as a linear transformation of the original data. Well-known linear transformation methods include, for example, principal component analysis, factor analysis, and projection pursuit. A recently developed linear transformation method is independent component analysis (ICA), in which the desired representation is the one that minimizes the statistical dependence of the components of the representation. Such a representation seems to capture the essential structure of the data in many applications. In this paper, we survey the existing theory and methods for ICA. 1 , 1989 "... A new approach to unsupervised learning in a single-layer linear feedforward neural network is discussed. An optimality principle is proposed which is based upon preserving maximal information in the output units. An algorithm for unsupervised learning based upon a Hebbian learning rule, which achie ..." Cited by 218 (0 self) Add to MetaCart A new approach to unsupervised learning in a single-layer linear feedforward neural network is discussed. An optimality principle is proposed which is based upon preserving maximal information in the output units.
An algorithm for unsupervised learning based upon a Hebbian learning rule, which achieves the desired optimality, is presented. The algorithm finds the eigenvectors of the input correlation matrix, and it is proven to converge with probability one. An implementation which can train neural networks using only local "synaptic" modification rules is described. It is shown that the algorithm is closely related to algorithms in statistics (Factor Analysis and Principal Components Analysis) and neural networks (Self-supervised Backpropagation, or the "encoder" problem). It thus provides an explanation of certain neural network behavior in terms of classical statistical techniques. Examples of the use of a linear network for solving image coding and texture segmentation problems are presented. Also, it is shown that the algorithm can be used to find "visual receptive fields" which are qualitatively similar to those found in primate retina and visual cortex. "... A network of highly interconnected linear neuron-like processing units and a simple, local, unsupervised rule for the modification of connection strengths between these units are proposed. After training the network on a high (m) dimensional distribution of input vectors, the lower (n) dimensional o ..." Cited by 63 (0 self) Add to MetaCart A network of highly interconnected linear neuron-like processing units and a simple, local, unsupervised rule for the modification of connection strengths between these units are proposed. After training the network on a high (m) dimensional distribution of input vectors, the lower (n) dimensional output will be a projection into the subspace of the n largest principal components (the subspace spanned by the n eigenvectors of largest eigenvalues of the input covariance matrix) and maximize the mutual information between the input and the output in the same way as principal component analysis does.
The purely local nature of the synaptic modification rule (simple Hebbian and anti-Hebbian) makes the implementation of the network easier, faster and biologically more plausible than rules depending on error propagation. - IEEE Signal Processing Letters , 2002 "... We consider the noiseless linear independent component analysis problem, in the case where the hidden sources s are non-negative. We assume that the random variables s_i are well-grounded in that they have a non-vanishing pdf in the (positive) neighbourhood of zero. For an orthonormal rotation y = ..." Cited by 63 (11 self) Add to MetaCart We consider the noiseless linear independent component analysis problem, in the case where the hidden sources s are non-negative. We assume that the random variables s_i are well-grounded in that they have a non-vanishing pdf in the (positive) neighbourhood of zero. For an orthonormal rotation y = Wx of pre-whitened observations x = QAs, under certain reasonable conditions we show that y is a permutation of s (apart from a scaling factor) if and only if y is non-negative with probability 1. We suggest that this may enable the construction of practical learning algorithms, particularly for sparse non-negative sources. - IEEE Transactions on neural networks , 1995 "... Networks of linear units are the simplest kind of networks, where the basic questions related to learning, generalization, and self-organisation can sometimes be answered analytically. We survey most of the known results on linear networks, including: (1) back-propagation learning and the structure ..." Cited by 56 (4 self) Add to MetaCart Networks of linear units are the simplest kind of networks, where the basic questions related to learning, generalization, and self-organisation can sometimes be answered analytically. We survey most of the known results on linear networks, including: (1) back-propagation learning and the structure of the error function landscape; (2) the temporal evolution of generalization; (3) unsupervised learning algorithms and their properties. The connections to classical statistical ideas, such as principal component analysis (PCA), are emphasized as well as several simple but challenging open questions. A few new results are also spread across the paper, including an analysis of the effect of noise on back-propagation networks and a unified view of all unsupervised algorithms. Keywords--- linear networks, supervised and unsupervised learning, Hebbian learning, principal components, generalization, local minima, self-organisation I. Introduction This paper addresses the problems of - IEEE Trans. Pattern Analysis and Machine Intelligence , 2003 "... Abstract—Appearance-based image analysis techniques require fast computation of principal components of high-dimensional image vectors. We introduce a fast incremental principal component analysis (IPCA) algorithm, called candid covariance-free IPCA (CCIPCA), used to compute the principal components ..." Cited by 56 (9 self) Add to MetaCart Abstract—Appearance-based image analysis techniques require fast computation of principal components of high-dimensional image vectors. We introduce a fast incremental principal component analysis (IPCA) algorithm, called candid covariance-free IPCA (CCIPCA), used to compute the principal components of a sequence of samples incrementally without estimating the covariance matrix (so covariance-free). The new method is motivated by the concept of statistical efficiency (the estimate has the smallest variance given the observed data).
We survey most of the known results on linear networks, including: (1) back-propagation learning and the structure of the error function landscape; (2) the temporal evolution of generalization; (3) unsupervised learning algorithms and their properties. The connections to classical statistical ideas, such as principal component analysis (PCA), are emphasized as well as several simple but challenging open questions. A few new results are also spread across the paper, including an analysis of the effect of noise on back-propagation networks and a unified view of all unsupervised algorithms. Keywords--- linear networks, supervised and unsupervised learning, Hebbian learning, principal components, generalization, local minima, self-organisation I. Introduction This paper addresses the problems of - IEEE Trans. Pattern Analysis and Machine Intelligence , 2003 "... Abstract—Appearance-based image analysis techniques require fast computation of principal components of high-dimensional image vectors. We introduce a fast incremental principal component analysis (IPCA) algorithm, called candid covariance-free IPCA (CCIPCA), used to compute the principal components ..." Cited by 56 (9 self) Add to MetaCart Abstract—Appearance-based image analysis techniques require fast computation of principal components of high-dimensional image vectors. We introduce a fast incremental principal component analysis (IPCA) algorithm, called candid covariance-free IPCA (CCIPCA), used to compute the principal components of a sequence of samples incrementally without estimating the covariance matrix (so covariance-free). The new method is motivated by the concept of statistical efficiency (the estimate has the smallest variance given the observed data). 
To do this, it keeps the scale of observations and computes the mean of observations incrementally, which is an efficient estimate for some well-known distributions (e.g., Gaussian), although the highest possible efficiency is not guaranteed in our case because of the unknown sample distribution. The method is for real-time applications and, thus, it does not allow iterations. It converges very fast for high-dimensional image vectors. Some links between IPCA and the development of the cerebral cortex are also discussed. Index Terms—Principal component analysis, incremental principal component analysis, stochastic gradient ascent (SGA), generalized Hebbian algorithm (GHA), orthogonal complement. - Signal Processing , 1998 "... A number of neural learning rules have been recently proposed... In this paper, we show that in fact, ICA can be performed by very simple Hebbian or anti-Hebbian learning rules, which may have only weak relations to such information-theoretical quantities. Rather surprisingly, practically any non-lin ..." Cited by 56 (11 self) Add to MetaCart A number of neural learning rules have been recently proposed... In this paper, we show that in fact, ICA can be performed by very simple Hebbian or anti-Hebbian learning rules, which may have only weak relations to such information-theoretical quantities. Rather surprisingly, practically any non-linear function can be used in the learning rule, provided only that the sign of the Hebbian/anti-Hebbian term is chosen correctly. In addition to the Hebbian-like mechanism, the weight vector is here constrained to have unit norm, and the data is preprocessed by prewhitening, or sphering. These results imply that one can choose the non-linearity so as to optimize desired statistical or numerical criteria. - Proc. IEEE , 1995 "... Abstract — This paper presents a tutorial overview of neural networks as signal processing tools for image compression.
They are well suited to the problem of image compression due to their massively parallel and distributed architecture. Their characteristics are analogous to some of the features o ..." Cited by 34 (1 self) Add to MetaCart Abstract — This paper presents a tutorial overview of neural networks as signal processing tools for image compression. They are well suited to the problem of image compression due to their massively parallel and distributed architecture. Their characteristics are analogous to some of the features of our own visual system, which allow us to process visual information with much ease. For example, multilayer perceptrons can be used as nonlinear predictors in differential pulse-code modulation (DPCM). Such predictors have been shown to increase the predictive gain relative to a linear predictor. Another active area of research is in the application of Hebbian learning to the extraction of principal components, which are the basis vectors for the optimal linear Karhunen-Loève transform (KLT). These learning algorithms are iterative, have some computational advantages over standard eigendecomposition techniques, and can be made to adapt to changes in the input signal. Yet another model, the self-organizing feature map (SOFM), has been used with a great deal of success in the design of codebooks for vector quantization (VQ). The resulting codebooks are less sensitive to initial conditions than the standard LBG algorithm, and the topological ordering of the entries can be exploited to further increase coding efficiency and reduce computational complexity. I. "... We consider the task of independent component analysis when the independent sources are known to be nonnegative and well-grounded, so that they have a nonzero probability density function (pdf) in the region of zero. We propose the use of a "nonnegative principal component analysis (nonnegative PCA) ..." 
Cited by 26 (2 self) Add to MetaCart We consider the task of independent component analysis when the independent sources are known to be nonnegative and well-grounded, so that they have a nonzero probability density function (pdf) in the region of zero. We propose the use of a "nonnegative principal component analysis (nonnegative PCA)" algorithm, which is a special case of the nonlinear PCA algorithm, but with a rectification nonlinearity, and we conjecture that this algorithm will find such nonnegative well-grounded independent sources, under reasonable initial conditions. While the algorithm has proved difficult to analyze in the general case, we give some analytical results that are consistent with this conjecture and some numerical simulations that illustrate its operation. Index Terms—independent component analysis, learning (artificial intelligence), matrix decomposition, principal component analysis, nonlinear principal component analysis, nonnegative PCA algorithm, nonnegative matrix factorization, nonzero probability density function, rectification nonlinearity, subspace learning rule. - IEEE Transactions on Automatic Control , 1996 "... A deterministic approach is proposed for proving the convergence of stochastic algorithms of the most general form, under necessary conditions on the input noise, and reasonable conditions on the (non-necessarily continuous) mean field. Emphasis is made on the case where more than one stationary poi ..."
Cited by 25 (0 self) Add to MetaCart A deterministic approach is proposed for proving the convergence of stochastic algorithms of the most general form, under necessary conditions on the input noise, and reasonable conditions on the (non-necessarily continuous) mean field. Emphasis is made on the case where more than one stationary point exists. We also use this approach to prove the convergence of stochastic algorithms with Markovian dynamics. 1 Introduction The general structure of stochastic algorithms is the following: θ_n = θ_{n-1} + γ_n H(θ_{n-1}, X_n) (1), where γ_n is a non-negative decreasing sequence, typically 1/n (or 1/n^(2/3) when an averaging technique is used, cf. [18]), X_n is a "somehow stationary" sequence, and θ_n is, at step n, the estimated solution of E[H(θ, X)] = 0, where the expectation is taken over the distribution of X. Stochastic algorithms have a wide range of applications in recursive system identification, adaptive filtering, pattern recognition, adaptive learning [20], sequential change detec...
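The recursion (1) is easy to demystify with the simplest possible mean field. Taking H(θ, x) = x − θ and γ_n = 1/n turns the scheme into the incremental mean, which is also the estimate the CCIPCA abstract above builds on. A minimal illustrative sketch (not code from any of the cited papers):

```python
def robbins_monro(samples, step=lambda n: 1.0 / n):
    """Stochastic approximation: theta_n = theta_{n-1} + gamma_n * H(theta_{n-1}, X_n),
    here with H(theta, x) = x - theta, whose mean-field root is E[X]."""
    theta = 0.0
    for n, x in enumerate(samples, start=1):
        theta += step(n) * (x - theta)
    return theta

# With gamma_n = 1/n this is exactly the running mean of the samples.
print(robbins_monro([1, 2, 3, 4, 5, 6, 7, 8, 9, 10]))  # 5.5
```

With a slower-decaying step size (e.g. γ_n = n^(-2/3), as mentioned for averaging schemes) the same loop still converges toward E[X], just along a different trajectory.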
Math Tools Discussion: Software Help Topic: Tutorial software for high school algebra Subject: RE: Tutorial software for high school algebra Author: bobk544 Date: Jan 28 2009 Wow, that HippoCampus site is absolutely magnificent! The potential for a math teaching approach like this is unlimited! I am continually in and out of the need for complex math and continually having to reinvent my learning approach, but with a site like this, I can see unlimited potential for everyone! I would suggest sometime in the future adding a "reverse engineering" node where you could paste in a complex formula from another reference and, using TeX converted to MathML, be able to mouse over a component of a formula, linking that individual formula component back to its origins, other tools, interactive reference examples etc., further enhancing those hierarchical diagrams of where you are in the hierarchy of mathematical nodes! Thanks, Monterey, for such a magnificent contribution to the educational process and for helping me personally, i.e. giving me hope despite my lack of attention to math in my educational training!
Glenbrook, CT Calculus Tutor Find a Glenbrook, CT Calculus Tutor ...It helps me to keep my students motivated, which makes the whole learning process much easier. It would be my pleasure to develop your or your children's knowledge. Regards, VilmosI got my bachelor degree in economic sciences and business administration, both in management and analysis. 37 Subjects: including calculus, statistics, GRE, geometry I have been a physics professor at one of the top universities in the country for 19 years. I have taught physics classes at all levels from 'Physics for poets" to advanced mechanics, quantum mechanics, and relativity. I use calculus, linear algebra, and other topics in mathematics extensively both in my teaching and in my research. 12 Subjects: including calculus, physics, algebra 1, algebra 2 ...Prior to business school I worked as a consultant at Ernst & Young. Before moving to this area I worked with a start-up in Akron that was looking to commercialize two of its patents. I have advanced computer skills with Microsoft word, Microsoft excel, Microsoft PowerPoint and Macros. 25 Subjects: including calculus, writing, geometry, accounting ...It would be a pleasure to tutor in any math or physics related subjects at the college freshman level or lower.I took an enriched geometry course back in my sophomore year of high school four years ago. As with the other math subjects I took in high school, I have used Geometry at much greater d... 9 Subjects: including calculus, physics, geometry, algebra 1 ...I have a degree in Business from New York University's Leonard Stern School of Business. I tutor SAT Math, ACT Math, and elementary through high school Math: Geometry, Pre-Algebra, Algebra 1, Algebra 2 and Trigonometry. I also tutor Science, History, Astronomy, and Grammar.I excelled at high school math, getting As. 
29 Subjects: including calculus, chemistry, geometry, biology
Results 1 - 10 of 27 , 2002 "... Following the long-held belief that the Internet is hierarchical, the network topology generators most widely used by the Internet research community, Transit-Stub and Tiers, create networks with a deliberately hierarchical structure. However, in 1999 a seminal paper by Faloutsos et al. revealed tha ..." Cited by 165 (14 self) Add to MetaCart Following the long-held belief that the Internet is hierarchical, the network topology generators most widely used by the Internet research community, Transit-Stub and Tiers, create networks with a deliberately hierarchical structure. However, in 1999 a seminal paper by Faloutsos et al. revealed that the Internet's degree distribution is a power-law. Because the degree distributions produced by the Transit-Stub and Tiers generators are not power-laws, the research community has largely dismissed them as inadequate and proposed new network generators that attempt to generate graphs with power-law degree distributions. - ACM SIGACT News - Distributed Computing Column , 2001 "... This article focuses on routing messages in distributed networks with efficient data structures. After an overview of the various results of the literature, we point out some interesting open problems. ..." Cited by 49 (12 self) Add to MetaCart This article focuses on routing messages in distributed networks with efficient data structures. After an overview of the various results of the literature, we point out some interesting open problems. , 2001 "... In an edge modification problem one has to change the edge set of a given graph as little as possible so as to satisfy a certain property. We prove the NP-hardness of a variety of edge modification problems with respect to some well-studied classes of graphs. These include perfect, chordal, chain, c ..."
Cited by 41 (2 self) Add to MetaCart In an edge modification problem one has to change the edge set of a given graph as little as possible so as to satisfy a certain property. We prove the NP-hardness of a variety of edge modification problems with respect to some well-studied classes of graphs. These include perfect, chordal, chain, comparability, split and asteroidal triple free. We show that some of these problems become polynomial when the input graph has bounded degree. We also give a general constant factor approximation algorithm for deletion and editing problems on bounded degree graphs with respect to properties that can be characterized by a finite set of forbidden induced subgraphs. - Comput. Commun. Rev "... It has long been thought that the Internet, and its constituent networks, are hierarchical in nature. Consequently, the network topology generators most widely used by the Internet research community, GT-ITM [7] and Tiers [11], create networks with a deliberately hierarchical structure. However, rec ..." Cited by 32 (5 self) Add to MetaCart It has long been thought that the Internet, and its constituent networks, are hierarchical in nature. Consequently, the network topology generators most widely used by the Internet research community, GT-ITM [7] and Tiers [11], create networks with a deliberately hierarchical structure. However, recent work by Faloutsos et al. [13] revealed that the Internet’s degree distribution — the distribution of the number of connections routers or Autonomous Systems (ASs) have — is a power-law. The degree distributions produced by the GT-ITM and Tiers generators are not power-laws. To rectify this problem, several new network generators have recently been proposed that produce more realistic degree distributions; these new generators do not attempt to create a hierarchical structure but instead focus solely on the degree distribution. 
There are thus two families of network generators, structural generators that treat hierarchy as fundamental and degree-based generators that treat the degree distribution as fundamental. In this paper we use several topology metrics to compare the networks produced by these two families of generators to current measurements of the Internet graph. We find that the degree-based generators produce better models, at least according to our topology metrics, of both the AS-level and router-level Internet graphs. We then seek to resolve the seeming paradox that while the Internet certainly has hierarchy, it appears that the Internet graphs are better modeled by generators that do not explicitly construct hierarchies. We conclude our paper with a brief study of other network structures, such as the pointer structure in the web and the set of airline routes, some of which turn out to have metric properties similar to that of the Internet. 1 - Parallel Computing , 2001 "... For the design and analysis of algorithms that process huge data sets, a machine model is needed that handles parallel disks. There seems to be a dilemma between simple and flexible use of such a model and accurate modelling of details of the hardware. This paper explains how many aspects of this pr ..." Cited by 16 (3 self) Add to MetaCart For the design and analysis of algorithms that process huge data sets, a machine model is needed that handles parallel disks. There seems to be a dilemma between simple and flexible use of such a model and accurate modelling of details of the hardware. This paper explains how many aspects of this problem can be resolved. The programming model implements one large logical disk allowing concurrent access to arbitrary sets of variable size blocks. This model can be implemented efficiently on multiple independent disks even if zones with different speed, communication bottlenecks and failed disks are allowed.
These results not only provide useful algorithmic tools but also imply a theoretical justification for studying external memory algorithms using simple abstract models. , 1999 "... Experimentally it has been found that any two people in the world, chosen at random, are connected to one another by a short chain of intermediate acquaintances, of typical length about six. This phenomenon, colloquially referred to as the six degrees of separation, has been the subject of a conside ..." Cited by 14 (2 self) Add to MetaCart Experimentally it has been found that any two people in the world, chosen at random, are connected to one another by a short chain of intermediate acquaintances, of typical length about six. This phenomenon, colloquially referred to as the six degrees of separation, has been the subject of a considerable amount of recent research and modeling, which we review here. , 2001 "... Interval routing is a compact way for representing routing tables on a graph. It is based on grouping together, in each node, destination addresses that use the same outgoing edge in the routing table. Such groups of addresses are represented by some intervals of consecutive integers. We show th ..." Cited by 10 (4 self) Add to MetaCart Interval routing is a compact way for representing routing tables on a graph. It is based on grouping together, in each node, destination addresses that use the same outgoing edge in the routing table. Such groups of addresses are represented by some intervals of consecutive integers. We show that almost all the graphs, i.e., a fraction of at least 1 − 1/n² of all the n-node graphs, support a shortest path interval routing with three intervals per outgoing edge, even if the addresses of the nodes are arbitrarily fixed in advance and cannot be chosen by the designer of the routing scheme.
In case the addresses are initialized randomly, we show that two intervals per outgoing edge suffice, and conversely, that two intervals are required, for almost all graphs. Finally, if the node addresses can be chosen as desired, we show how to design in polynomial time a shortest path interval routing with a single interval per outgoing edge, for all but at most O(log³ n) outgoing edges in each node. It follows that almost all graphs support a shortest path routing scheme which requires at most n + O(log⁴ n) bits of routing information per node, improving on the previous upper bound. - IEEE Trans. Inform. Theory , 2000 "... For a rational α ∈ (0, 1), let A_{n×m,α} be the set of binary n × m arrays in which each row has Hamming weight αm and each column has Hamming weight αn, where αm and αn are integers. (The special case of two-dimensional balanced arrays corresponds to α = 1/2 and even values for n ..." Cited by 8 (1 self) Add to MetaCart For a rational α ∈ (0, 1), let A_{n×m,α} be the set of binary n × m arrays in which each row has Hamming weight αm and each column has Hamming weight αn, where αm and αn are integers. (The special case of two-dimensional balanced arrays corresponds to α = 1/2 and even values for n and m.) The redundancy of A_{n×m,α} is defined by ρ_{n×m,α} = nmH(α) − log₂|A_{n×m,α}|, where H(x) = −x log₂ x − (1−x) log₂(1−x). Bounds on ρ_{n×m,α} are obtained in terms of the redundancies of the sets A_{ℓ,α} of all binary ℓ-vectors with Hamming weight αℓ, ℓ ∈ {n, m}. Specifically, it is shown that ρ_{n×m,α} ≤ nρ_{m,α} + mρ_{n,α}, where ρ_{ℓ,α} = ℓH(α) − log₂|A_{ℓ,α}|, and that this bound is tight up to an additive term O(n + log m). A polynomial-time coding algorithm is presented that maps unconstrained input sequences into A_{n×m,α} at a rate H(α) − (ρ_{m,α}/m) − (ρ_{n,α}/n). Keywords: - SIAM J. Comput "... Abstract.
We investigate topological, combinatorial, statistical, and enumeration properties of finite graphs with high Kolmogorov complexity (almost all graphs) using the novel incompressibility method. Example results are: (i) the mean and variance of the number of (possibly overlapping) ordered l ..." Cited by 8 (1 self) Add to MetaCart Abstract. We investigate topological, combinatorial, statistical, and enumeration properties of finite graphs with high Kolmogorov complexity (almost all graphs) using the novel incompressibility method. Example results are: (i) the mean and variance of the number of (possibly overlapping) ordered labeled subgraphs of a labeled graph as a function of its randomness deficiency (how far it falls short of the maximum possible Kolmogorov complexity) and (ii) a new elementary proof for the number of unlabeled graphs. Key words. Kolmogorov complexity, incompressibility method, random graphs, enumeration of graphs, algorithmic information theory AMS subject classifications. 68Q30, 05C80, 05C35, 05C30 1. Introduction. The
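The redundancy bound ρ_{n×m,α} ≤ nρ_{m,α} + mρ_{n,α} quoted in the balanced-arrays abstract above can be spot-checked by brute force for a tiny case. The following sketch is illustrative only (it enumerates all 2^(nm) arrays, so it is feasible only for very small n and m):

```python
import math
from itertools import product

def H(x):
    """Binary entropy in bits."""
    if x in (0.0, 1.0):
        return 0.0
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def vector_redundancy(l, alpha):
    """rho_{l,alpha} = l*H(alpha) - log2 |A_{l,alpha}|, with |A_{l,alpha}| = C(l, alpha*l)."""
    w = round(alpha * l)
    return l * H(alpha) - math.log2(math.comb(l, w))

def array_redundancy(n, m, alpha):
    """rho_{n x m, alpha}: brute-force count of binary n x m arrays with
    constant row weight alpha*m and column weight alpha*n."""
    rw, cw = round(alpha * m), round(alpha * n)
    count = 0
    for bits in product((0, 1), repeat=n * m):
        rows = [bits[i * m:(i + 1) * m] for i in range(n)]
        if all(sum(r) == rw for r in rows) and all(sum(c) == cw for c in zip(*rows)):
            count += 1
    return n * m * H(alpha) - math.log2(count)

# Check the bound rho_{n x m, alpha} <= n*rho_{m,alpha} + m*rho_{n,alpha}
# for the balanced case n = m = 4, alpha = 1/2.
n = m = 4
lhs = array_redundancy(n, m, 0.5)
rhs = n * vector_redundancy(m, 0.5) + m * vector_redundancy(n, 0.5)
print(lhs <= rhs)  # True
```

For n = m = 4 there are 90 such balanced arrays, so the left-hand side is 16 − log₂ 90 ≈ 9.51 against a right-hand side of about 11.32.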
Rule-based learning systems for support vector machines Haydemar Nuñez, Cecilio Angulo and Andreu Catala Neural Processing Letters 2006. ISSN 1370-4621 In this article, we propose some methods for deriving symbolic interpretations of data in the form of rule-based learning systems by using Support Vector Machines (SVM). First, Radial Basis Function Neural Network (RBFNN) learning techniques are explored, as is usual in the literature, since the local nature of this paradigm makes it a suitable platform for performing rule extraction. By using support vectors from a learned SVM it is possible in our approach to use any standard Radial Basis Function (RBF) learning technique for the rule extraction, whilst avoiding the problem of overlapping between classes. We will show that, by merging node centers and support vectors, explanation rules can be obtained in the form of ellipsoids and hyper-rectangles. Next, in a dual form, following the framework developed for RBFNN, we construct an algorithm for SVM. Taking SVM as the main paradigm, geometry in the input space is defined from a combination of support vectors and prototype vectors obtained from any clustering algorithm. Finally, randomness associated with clustering algorithms or RBF learning is avoided by using only a learned SVM to define the geometry of the studied region. The results obtained from a certain number of experiments on benchmarks in different domains are also given, leading to a conclusion on the viability of our proposal.
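To make the geometry concrete, here is a deliberately simplified, hypothetical sketch of a hyper-rectangle rule spanned by a cluster prototype and a support vector. The vectors are invented for illustration; this is not the authors' actual extraction algorithm, only the general shape of such a rule:

```python
def hyper_rectangle_rule(prototype, support_vector):
    """Axis-aligned box whose opposite corners are a class prototype and a
    support vector near the decision boundary; points inside the box are
    covered by the extracted IF-THEN rule."""
    lo = [min(p, s) for p, s in zip(prototype, support_vector)]
    hi = [max(p, s) for p, s in zip(prototype, support_vector)]
    return lo, hi

def rule_covers(rule, x):
    """A point satisfies the rule iff every coordinate lies within the box."""
    lo, hi = rule
    return all(l <= xi <= h for l, xi, h in zip(lo, x, hi))

rule = hyper_rectangle_rule([0.2, 0.5], [0.8, 0.1])
print(rule_covers(rule, [0.5, 0.3]))  # True
print(rule_covers(rule, [0.9, 0.3]))  # False
```

Reading the box off as interval conditions per input dimension is what turns the geometric region into a symbolic IF-THEN rule.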
Maximal set of commuting observables McLaren Rulez SpectraCat, my problem is like this: Let's say I have three commuting operators. Now, when I specify the common eigenstate, I should put in three quantum numbers. But how can I be sure that my three operators are all distinct and not trivial variations of one another like H and 2H are? Well, to me that problem seems contrived. If you know what the operators are well enough to know that they commute, then you should be able to figure out if they have trivial relationships to each other. Operators correspond to physical observables (at least the ones we typically care about do), so just saying the name of the physical observable corresponding to the operator should tell you what you want to know. Mathematically, the answer that kof9595995 gave seems reasonable to me. Additionally, if you know the mathematical form of the eigenfunctions, and there is an unresolved degeneracy, then it may mean that there is an additional commuting observable that has not yet been specified. I guess my point is that I don't think this issue comes up in practice ... can you give an example of what you are talking about? I mean one that doesn't rely on contrived operators like 2H, but rather on operators corresponding to physical observables that have a linear dependency that is not immediately obvious.
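The H-versus-2H point can be made concrete with toy 2×2 matrices standing in for observables (an illustration, not from the thread itself): σ_z trivially commutes with 2σ_z, which adds no new quantum number, while σ_z and σ_x genuinely fail to commute.

```python
def matmul(a, b):
    """2x2 matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def commutator(a, b):
    """[A, B] = AB - BA; zero iff the operators commute."""
    ab, ba = matmul(a, b), matmul(b, a)
    return [[ab[i][j] - ba[i][j] for j in range(2)] for i in range(2)]

sz = [[1, 0], [0, -1]]      # Pauli sigma_z
sx = [[0, 1], [1, 0]]       # Pauli sigma_x
two_sz = [[2, 0], [0, -2]]  # a "trivial variation" of sz, like 2H vs H

zero = [[0, 0], [0, 0]]
print(commutator(sz, two_sz) == zero)  # True: commutes, but labels no new state
print(commutator(sz, sx) == zero)      # False: incompatible observables
```

The functional dependence of 2σ_z on σ_z shows up exactly as kof9595995's criterion suggests: its eigenvalues are fixed by those of σ_z, so it resolves no degeneracy.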
Idea for a fast and efficient prime sieve 06-15-2010 #1 The basic idea is pretty simple. We have an accumulator that stores the product of all primes up to N; if the GCD of the accumulator and the current index is one, then (the index is prime and) the accumulator is multiplied with the index. The algorithm is quite fast due to the rate of convergence of the GCD calculation, and the memory footprint *should* be comparable to that of conventional sieves (although I haven't figured out yet a good approximation of the number of bits needed to store the results). Moreover, rather than requiring a buffer to mark off primes (as with conventional sieves), the memory is instead dedicated to some BigInteger type. Anyway, I haven't yet tested the theory using my own BigInteger class, as I need to either determine the number of bits needed or else add the functionality to the class to automatically "grow" as calculations proceed, but I am working on that. So I'd like to know what you guys think - is the idea worth pursuing? Also, if anyone knows of a decent "bits_needed" approximation function, please let me know. Thanks! Oh, and here is a working example, in C++, using 32-bit unsigned int's (which, unfortunately, can only be used to generate all the primes up to 30!):

template < typename UnsignedInteger >
UnsignedInteger gcd( UnsignedInteger lhs, UnsignedInteger rhs )
{
    for( ;; )
    {
        if( ( lhs %= rhs ) == 0 )
            return rhs;
        if( ( rhs %= lhs ) == 0 )
            return lhs;
    }
    // We never really get here - this just prevents a compiler warning...
    return UnsignedInteger( );
}

template < typename UnsignedInteger, typename Iterator >
void generate_primes_up_to( UnsignedInteger const& val, Iterator out )
{
    static UnsignedInteger const one = 1;
    for( UnsignedInteger acc = 1, idx = 2; idx <= val; ++idx )
    {
        if( gcd( acc, idx ) == one )
        {
            acc *= idx;
            *out++ = idx;
        }
    }
}

#include <iostream>
#include <iterator>
int main( void )
{
    using namespace std;
    unsigned lim = 30;
    generate_primes_up_to( lim, ostream_iterator< unsigned >( cout, "\n" ) );
}

One more thing - I realize that the generator function would be more efficient if it skipped even numbers, etc, but I chose to keep it simple, for the sake of clarity... acc *= idx; This, however, can make the value grow quite fast? What if I wanted to list the first million primes - wouldn't I have to deal with the product of a million ever-growing values? Otherwise it seems like a clever way to test if the given value is divisible by any of the primes found before. Not sure if it can be called a sieve, though. It looks like fundamentally this is still based on trial divisions (finding GCD), except instead of many divisions with smaller values you'll have fewer divisions with greater values... Last edited by anon; 06-20-2010 at 05:50 AM. I might be wrong. Thank you, anon. You sure know how to recognize different types of trees from quite a long way away. Quoted more than 1000 times (I hope). The result of your ongoing multiplication grows quickly initially (with a curve similar, if less steep, to the factorial function), but should slow down at very high numbers, if I understand correctly what the prime number theorem means. But it will definitely still grow quickly. For the sake of easy analysis, let's say it's O(n!) and disregard the lower density of prime numbers for high n. This is probably not algorithmically sound. The number of bits needed for a number is log2(n). So the space required for your algorithm is O(log(n!)).
I'm not sure if this can be reduced to a simpler term, but it is definitely greater than the O(n) of a conventional sieve. But of course, if the n! is incorrect, the situation might be different. All the buzzt! "There is not now, nor has there ever been, nor will there ever be, any programming language in which it is the least bit difficult to write bad code." - Flon's Law
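For what it's worth, the thread's accumulator idea is easy to try with arbitrary-precision integers; Python's built-in ints grow automatically, which sidesteps the "bits_needed" question entirely. This is just a transcription of the algorithm discussed above, not a claim about its asymptotics:

```python
from math import gcd

def primes_up_to(n):
    """Generate primes <= n by keeping a primorial accumulator:
    idx is prime iff it shares no factor with the product of earlier primes."""
    acc = 1
    primes = []
    for idx in range(2, n + 1):
        if gcd(acc, idx) == 1:
            acc *= idx
            primes.append(idx)
    return primes

print(primes_up_to(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

This makes the space question easy to see experimentally: after the call, `acc` is the primorial of n, whose bit length grows roughly like n / ln 2 by the prime number theorem, consistent with the O(log(n!)) = O(n log n) estimate above only as an upper bound.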
{"url":"http://cboard.cprogramming.com/general-discussions/127802-idea-fast-efficient-prime-sieve.html","timestamp":"2014-04-16T22:39:24Z","content_type":null,"content_length":"47464","record_id":"<urn:uuid:1e542abb-4e63-4e74-944c-8d6d8103b0f3>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00060-ip-10-147-4-33.ec2.internal.warc.gz"}
{"url":"http://openstudy.com/updates/504883ace4b003bc12041545","timestamp":"2014-04-16T17:14:38Z","content_type":null,"content_length":"106247","record_id":"<urn:uuid:6d877ae2-3bc4-495c-9e59-370b8c909552>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00010-ip-10-147-4-33.ec2.internal.warc.gz"}
Growth functions: can you help prove the statements about the functions in the attachment?

The key is to show that $\log_a n=C_1\log_b n$ and $\log_b n=C_2\log_a n$ for some constants $C_1, C_2$.

Now logarithm is not such a scary beast. Personally, I remember only three properties of logarithms.

(1) Definition: $\log_a n=x$ iff $a^x=n$
(2) $\log_a(x^y)=y\log_a x$
(3) $\log_a(xy)=\log_a x+\log_a y$

It's not difficult to deduce (2) and (3) from (1), so the main thing you need to remember is that the logarithm is the inverse function to the power: $a^{\log_a x}=x$ and $\log_a(a^x)=x$.

So, how do we express $\log_a n$ through $\log_b n$? Let $\log_a n=x$. By (1), $a^x=n$. We need to use $\log_b n$, so let's take $\log_b$ of both sides: $\log_b(a^x)=\log_b n$. By (2), $x\log_b a=\log_b n$. Recalling the definition of $x$, we get $\log_b a\log_a n=\log_b n$.

It's also easy to remember this last formula. Here is a mnemonic (not mathematical) explanation: you go from $b$ to $a$ ($\log_b a$), then from $a$ to $n$ ($\log_a n$), and the result is from $b$ to $n$ ($\log_b n$).
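The identity derived above can be checked numerically; a small sketch (the particular values of a, b, n are my own choices):

```python
import math

# Check the change-of-base identity log_b(a) * log_a(n) = log_b(n)
# with a = 2, b = 10, n = 12345.
a, b, n = 2.0, 10.0, 12345.0
lhs = math.log(a, b) * math.log(n, a)   # log_b(a) * log_a(n)
rhs = math.log(n, b)                    # log_b(n)
print(abs(lhs - rhs) < 1e-9)            # → True
```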
{"url":"http://mathhelpforum.com/discrete-math/116897-growth-function-prove.html","timestamp":"2014-04-20T03:25:56Z","content_type":null,"content_length":"44561","record_id":"<urn:uuid:1376dbdd-32f8-4d86-9ce8-7e3fea0fe8a6>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00353-ip-10-147-4-33.ec2.internal.warc.gz"}
Hibbeler, Engineering Mechanics, 10th ed.: Kinematics of a Particle (True or False)

1. Dynamics is concerned with bodies that only have accelerated motion.
2. A particle has a mass but negligible size and shape.
3. Some objects can be considered as particles provided motion of the body is characterized by motion of its mass center and any rotation of the body can be neglected.
4. The distance the particle travels is a vector quantity.
5. The average velocity of a particle is different than its average speed.
6. If the acceleration is zero, the particle cannot move.
7. A position vector r acting along a straight axis s can never change direction.
8. A position vector r acting along a straight axis s can change its magnitude and sense of direction.
9. Kinematics is concerned with the forces that cause the motion.
10. Speed represents the magnitude of velocity.
11. Average speed represents the total distance traveled divided by the total time.
12. A particle can have an acceleration and yet have zero velocity.
13. The equation v = v_0 + a_c t can be used when a = (3t) m/s^2.
14. For rectilinear kinematics, the direction of v is defined by its algebraic sign.
15. For rectilinear kinematics, the positive direction of the coordinate axis can be directed to the left.
16. If v = (4t^3) m/s, it is necessary to know the position of the particle at a given instant in order to determine the position of the particle at another instant.
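For item 13, a worked check (my illustration, not part of the quiz) shows why the constant-acceleration formula does not apply when a depends on t:

```latex
% With a(t) = 3t m/s^2, velocity follows from integrating the acceleration:
\[
  v(t) = v_0 + \int_0^t a(\tau)\,d\tau
       = v_0 + \int_0^t 3\tau\,d\tau
       = v_0 + \tfrac{3}{2}\,t^{2},
\]
% which is not of the constant-acceleration form v = v_0 + a_c t.
```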
{"url":"http://wps.prenhall.com/esm_hibbeler_engmech_10/14/3722/952913.cw/index.html","timestamp":"2014-04-20T00:55:19Z","content_type":null,"content_length":"20464","record_id":"<urn:uuid:b01a479b-2b06-4950-838f-36f8bdbd8b43>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00053-ip-10-147-4-33.ec2.internal.warc.gz"}
Escondido Calculus Tutor

I have over 50 years of work experience, including 25 years as a drafter and designer and 26 years with CAD (primarily AutoCAD), 2 of them as a CAD instructor. I also provided CAD technical support and ran 2 CAD user groups for 15 years. I taught for 2 years at Southwestern College.
37 Subjects: including calculus, reading, physics, English

...I have 24 credit hours from the University of Texas, where I took graduate courses in applied mathematics. I also have a master's degree in Operations Research Engineering from Stanford University, which involves advanced courses in Probability, Stochastic Processes, Discrete Math, Differential Equation...
24 Subjects: including calculus, reading, physics, statistics

...But my firm belief is that anyone can learn anything given enough time, patience and inspiration. Even the most challenging subjects can be mastered by a willing learner and an able teacher. My primary focus as a tutor is to first help struggling learners overcome the fear of math.
6 Subjects: including calculus, geometry, algebra 1, algebra 2

...I have always loved and excelled in the mathematics/algebra field. I have graduated from San Diego State University twice: once with my bachelor's in business administration with an emphasis in finance and management, and secondly with an MBA. Education is important and I let you know that often.
11 Subjects: including calculus, geometry, public speaking, GMAT

...My goal when teaching the subject is to explain why, even though drownings increase when ice cream sales increase, it is not because people are drowning in their ice cream. On both the SAT Math section and the SAT Math 2 Subject Test I have scored a perfect 800. I mastered the subject material back in high school and continue to apply it in my daily life as an engineer.
19 Subjects: including calculus, chemistry, physics, writing
{"url":"http://www.purplemath.com/escondido_calculus_tutors.php","timestamp":"2014-04-20T19:16:34Z","content_type":null,"content_length":"23987","record_id":"<urn:uuid:0beebf29-6458-48ff-9713-e4a7543779ce>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00065-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: st: Margins after xtprobit, re.

From: Maarten Buis <maartenlbuis@gmail.com>
To: statalist@hsphsun2.harvard.edu
Subject: Re: st: Margins after xtprobit, re.
Date: Tue, 30 Aug 2011 19:46:37 +0200

On Tue, Aug 30, 2011 at 6:47 PM, natasha agarwal wrote:
> I was trying to estimate the following model:
> xi: xtprobit expdum a b a*b y14-y18 i.industry i.region, re
> where a = continuous and b = categorical.
> Now I wanted to compute the average partial effect of the
> specification with the main interest lying at the estimated
> coefficient on the interaction term.
> I use margins, predict(pu0) dydx(*)
> I do get the marginal effects for all the variables in the
> specification but I wanted to know whether the average marginal effect
> calculated by margins for the interaction term is correct and how
> would one interpret the same?

The marginal effect for the interaction term is wrong; see Norton et al. (2004). In general, if you want to use -margins- you should not use -xi- but the factor variable notation instead. It is crucial that all interactions are also created with the factor variable notation. A consequence will be that no marginal effect for the interaction term will be computed, but that is much better than a wrong marginal effect.

There is no easy way to get correct average marginal effects for interaction terms in such a multi-level model, as to do that right you also need to average over the unobserved group-level error term. The best and easiest way is to use -xtlogit- and interpret the odds ratios. They have a bad reputation as being hard to interpret, but if you take the logic step by step it becomes suddenly easy. See for example Buis (2010).
A compendium of several Statalist posts on this issue can be found at <http://www.maartenbuis.nl/publications/interactions.html>.

Hope this helps,
Maarten

Maarten L. Buis (2010) "Stata tip 87: Interpretation of interactions in non-linear models", The Stata Journal, 10(2), pp. 305-308.

Edward Norton, Hua Wang, and Chunrong Ai (2004) "Computing interaction effects and standard errors in logit and probit models", The Stata Journal, 4(2): 154-167.

Maarten L. Buis
Institut fuer Soziologie
Universitaet Tuebingen
Wilhelmstrasse 36
72074 Tuebingen

* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
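For illustration (not from the original exchange), the factor-variable version of the poster's model could look like this; variable names are taken from her post, `c.`/`i.` mark `a` as continuous and `b` as categorical, and `##` creates main effects and interaction in one step so -margins- can handle the interaction correctly:

```stata
xtprobit expdum c.a##i.b y14-y18 i.industry i.region, re
margins, dydx(a) predict(pu0)     // marginal effect of a, averaged over b
margins b, dydx(a) predict(pu0)   // marginal effect of a at each level of b
```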
{"url":"http://www.stata.com/statalist/archive/2011-08/msg01455.html","timestamp":"2014-04-16T17:01:18Z","content_type":null,"content_length":"9579","record_id":"<urn:uuid:f4ab4a7a-6067-4586-a483-51b8ff5f30f1>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00047-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: Stretchy Operators

• To: rminer@geom.umn.edu, w3c-math-erb@w3.org
• Subject: Re: Stretchy Operators
• From: ion@math.ams.org (Patrick D. F. Ion)
• Date: Wed, 03 Jul 1996 17:56:37 -0400
• Message-Id: <v02130507ae0092d7c699@[130.44.25.36]>
• X-Sender: ion@mr4.mr.ams.org

My personal position on the aesthetics of growing matching fences has always been very conservative. For instance, I feel that the surrounding parentheses in

    ( x + 1 )
    ( ----- )
    ( x - 1 )

should indeed have enough height to cover the numerator and denominator. The display list you have looks right to me. However, I think it is just an algorithmic hack to decide that the fence heights _always_ have to exceed the highest and lowest excursions of numerator and denominator, as in something like

      (x + a) PROD_{i=1..oo} B_i(x) + C(x) + D
    ( ---------------------------------------- )
                      x - 1

or a worse case with an integral that I can't readily do in ASCII. There are expressions in high-energy physics that I don't think used to have matching parens before algorithmic typesetting took over from the visually oriented. For instance, in the case

    ||x-a| sin(b)|

the important thing is probably that the outer vert pair be associated together and not be confused with the inner. So we have a case here of a modulus of a product of a modulus by something signed. A similar example could be constructed with norms instead of the presumed absolute values, using double (or triple, or subscripted) verts. Personally I don't think the outer modulus here needs to be displayed larger at all.

If you want always to size fences automatically, increasing outward, then one method has got to be a pairing of fences, and a rule such as alluded to above about covering the extreme excursions of any expression nested within. But that leads to things like the TeX dummy \. fences to allow balancing, and to difficulties when a "bracketed expression" continues over a line.
In addition there are constructions like the semi-open interval notation [a,b) which might have to have explicit pairings added.
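In LaTeX terms (my illustration; the original message predates this framing), the two policies the author contrasts correspond to automatic versus manual fence sizing:

```latex
% Automatic: \left ... \right grows the outer pair to cover everything
% inside, and \left. / \right. are the "dummy" fences used for balancing.
\[ \left|\, \lvert x-a\rvert \sin(b) \,\right| \]

% Manual: pick a fixed size, keeping the outer bars only slightly larger,
% as the author prefers for nested moduli.
\[ \bigl|\, \lvert x-a\rvert \sin(b) \,\bigr| \]
```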
{"url":"http://lists.w3.org/Archives/Public/w3c-math-erb/msg00444.html","timestamp":"2014-04-19T05:36:26Z","content_type":null,"content_length":"4232","record_id":"<urn:uuid:3826ff39-1422-47b0-8627-2df77dd1d62e>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00384-ip-10-147-4-33.ec2.internal.warc.gz"}
MathFiction: Turbulence (Giles Foden) A British meteorologist is stationed in Scotland during World War II not to simply run a weather station (which is his cover), but to get to know the brilliant Wallace Ryman and learn to use his mathematical approach to weather prediction. Of course, weather prediction and modeling turbulence are not necessarily mathematical subjects. There are those who address them experimentally (for example, looking at the flow of a liquid around a sphere in a laboratory) or entirely non-scientifically ("my knee hurts, so it will rain tomorrow"). Therefore, it is worth mentioning here that in this book, the approach is decidedly mathematical. Ryman is described several times as being a mathematician, it is emphasized that Meadows (the young meteorologist) was a prize winning math student, much of the focus is on the Ryman Number which measures turbulence, and discussions of mathematics occur throughout the book. Although this justifies the inclusion of the book in this database, it does not really explain what the book is about since, besides the mathematics that underlies the plot, there is a great deal of human interest as well. Ryman is a pacifist who refuses to help the British government with the war. Meadows is a young man, desperate to prove himself romantically and scientifically, and still scarred by the horrific death of his parents in a mudslide. Ryman's beautiful wife wishes to be a mother, and thinks her fertility problems might be resolved if Meadows were the father rather than her husband. And, of course, there is the war and D-Day, which add to the tension despite the fact that the reader knows well how that will turn out anyway. As one might expect from the author of the acclaimed Last King of Scotland, all of this is handled very well. However, for obvious reasons, I will concentrate in the rest of this review on the mathematical aspects. 
Here, for example, are a few of the more mathematical passages from the book: (quoted from Turbulence) "The Ryman method involves describing every weather situation in figures and making a mathematically informed estimate of how it could develop", I said. "He divides the atmosphere into three-dimensional parcels of air and assigns numerical values to each aspect of the weather within them. Then he uses maths to see where things may go." "The truth is, Meadows, he's not an easy man but he is a brilliant one and the British meteorological community has felt the lack of him keenly. Now we come to the nub of the thing. Are you acquainted with the so-called Ryman number?" I was coming to the limit of my knowledge. "Only in the most basic sense, sir," I admitted. "It explains the dynamic relationship between the two type of energy, kinetic and potential, that change the weather." Sir Peter nodded. He did not seem surprised. "No one has got much beyond the basics. That is what I' am sending you to Scotland for. Though I once used some of his work, I myself know only a little about this side of things." "Why do you need to...? If I may ask...?" "The Ryman number is of enormous significance because it defines the amount of turbulence in a given situation. ... The government wants to use this number for a particular operation. Airborne and amphibious and enormous in scale. The long-expected invasion across the Channel into mainland Europe. We think Ryan himself is the only man alive who really understand how a range of values of his number might be practically applied - around a specific geographical area and over a particular time window - but he has not responded to my letters." (quoted from Turbulence) I spent the second night in the cot-house, as I would many over the forthcoming four months, doing calculations - sometimes in my head, sometimes with a wooden slide-rule, notched and ink-stained, which I still possess. 
Squeezing precision out of continuous domains in a mustering tumult of differential calculus - such was my life in that strange time. Lying on the bed doing calculus. Sitting on the crapper doing calculus. Shaving doing calculus. Doing calculus while listening to the radio, hearing what was going on in the war or, for preference, some classical music. Doing calculus while eating. Doing calculus while squeezing the toothpaste tube. (quoted from Turbulence) "He has a messy little beard, wears specs. Looks a bit peculiar. Holes in his suit jacket." She giggled. "You better watch out or that's what you will become." "Why?" I asked, affronted. "All you scientists end up that way." "What do you mean?" "You have no style. All you think about are your equations." "Ah," I said, rising to the challenge. "That that is just it, don't you see? The style is in the equations. Some people write ugly proofs, others do it with panache. I like to think mine are as beautiful as, as -- well, anything!" I watched her face become aghast. "Anything? Anything is not beautiful. Only special things are beautiful." I felt embarrassed at my inability to express myself. "All right, Miss, if you say so. But one day I'll show you some of those equations and you will see what I mean." A few little things are clear indicators to me that the author is no expert in mathematics. (For instance, a mathematician never talks of "solving" a computation. One performs computations. Equations are solved when one finds a choice of objects which make it true. Also, I do not think I have ever before seen a derivative written as "dT/∂t", a quotient of a total and partial differential, such as shows up here in the definition of the Ryman number.) On the other hand, what he writes about mathematics is reasonable (though not particularly deep or interesting) and positive. In fact, I would say that this book is the kind of book I was thinking of when I started this website. 
My original goal in starting this directory of mathematical fiction was simply to select works of fiction that would serve as good "propaganda" for mathematics, helping readers to recognize the beauty and usefulness of this academic discipline, and this book certainly can do that. As with many works of historical fiction, it is sometimes difficult to keep reality separate from the fiction. For instance, Pyke (the man described in the last quote above) is a real scientist who worked with the idea of using ice for construction. So far as I know, none of the other major characters were actual historical figures, although the author claims to have partially based Ryman on Lewis Fry Richardson. (Note: According to this article in the AMS Notices, it was Rossby, a student of Richardson, who made important computations regarding weather that were utilized on D-Day.) Similarly, although there are things similar to the Ryman number in many areas of applied mathematics, the Ryman number itself is fictional. Overall, this is a nicely written (though sometimes overly melodramatic) novel about World War II with interesting characters, many of whom happen to talk about mathematics frequently. I am certain that many regular visitors to this website will enjoy reading it. If you do, please help me out by posting your own opinions and comments below using the link in the Ratings section.

Contributed by Paul

The book missed the point that the final weather decision was made on foot of the meteorological report from Belmullet in the north-west of Ireland, which showed an approaching high and tilted the balance to D-Day being on June 6th. This is well documented and indeed is celebrated in Belmullet annually. It showed there was a short window of opportunity which would be in the English Channel area on the 6th, after the poor weather of the 5th. This was truly the deciding weather report.
{"url":"http://kasmana.people.cofc.edu/MATHFICT/mfview.php?callnumber=mf944","timestamp":"2014-04-16T11:31:38Z","content_type":null,"content_length":"16912","record_id":"<urn:uuid:57df37d4-ee88-45ea-841c-8a479f24ff77>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00536-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics Department, Princeton University

Math Physics Seminar, Ajay Chandran, "Construction and Analysis of a hierarchical massless QFT"

I also show how methods from statistical mechanics can be used to prove the full scale invariance of this QFT. (Joint work with Abdelmalek Abdesselam and Gianluca Guadagni.)

Location: Jadwin A06
Date/Time: 10/01/13 at 4:30 pm - 10/01/13 at 6:00 pm
Category: Mathematical Physics Seminar
Department: PCTS
{"url":"http://www.princeton.edu/physics/events_archive/viewevent.xml?id=688","timestamp":"2014-04-20T11:38:48Z","content_type":null,"content_length":"9820","record_id":"<urn:uuid:ffd43499-d420-4743-b404-f3886cfae889>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00435-ip-10-147-4-33.ec2.internal.warc.gz"}
Binary notation

You've asked to convert 1474 to base 2 (binary notation).

We iteratively divide by 2. Depending on whether the division has a remainder, we add a 1 (remainder) or a 0 (no remainder) to the left of the binary number we generate. We start with the number in base 10 you provided:

1474:2 = 737, no remainder. So we add a 0 to the left of our binary number: 0
737:2 = 368, and the remainder is 1. So we add a 1: 10
368:2 = 184, no remainder. So we add a 0: 010
184:2 = 92, no remainder. So we add a 0: 0010
92:2 = 46, no remainder. So we add a 0: 00010
46:2 = 23, no remainder. So we add a 0: 000010
23:2 = 11, and the remainder is 1. So we add a 1: 1000010
11:2 = 5, and the remainder is 1. So we add a 1: 11000010
5:2 = 2, and the remainder is 1. So we add a 1: 111000010
2:2 = 1, no remainder. So we add a 0: 0111000010
1:2 = 0, and the remainder is 1. So we add a 1: 10111000010

The remaining quotient is zero, so we stop dividing.

1474 (base 10) -> 10111000010 (base 2)
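The same repeated-division procedure can be written as a short Python sketch (mine, not from the page):

```python
def to_binary(n: int) -> str:
    """Convert a non-negative base-10 integer to binary by repeated
    division by 2, prepending each remainder to the left, exactly as
    the walkthrough above does for 1474."""
    if n == 0:
        return "0"
    bits = ""
    while n > 0:
        n, r = divmod(n, 2)   # quotient and remainder in one step
        bits = str(r) + bits  # the remainder goes on the left
    return bits

print(to_binary(1474))  # → 10111000010
```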
{"url":"http://www.cs.odu.edu/~jbollen/cgi-bin/dec2bin.cgi?dec=1474","timestamp":"2014-04-17T01:03:07Z","content_type":null,"content_length":"2046","record_id":"<urn:uuid:bd047655-3197-41aa-aaa3-38f1be39042e>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00092-ip-10-147-4-33.ec2.internal.warc.gz"}
y Logarithmic Combinatorial - Combinatorics, Probability and Computing, 2000

Cited by 24 (1 self)

We show that the limiting distribution of the number of comparisons used by Hoare's quickselect algorithm, when given a random permutation of n elements, for finding the m-th smallest element, where m = o(n), is the Dickman function. The limiting distribution of the number of exchanges is also derived.

1 Quickselect

Quickselect is one of the simplest and most efficient algorithms in practice for finding specified order statistics in a given sequence. It was invented by Hoare [19] and uses the usual partitioning procedure of quicksort: choose first a partitioning key, say x; regroup the given sequence into two parts corresponding to elements whose values are less than and larger than x, respectively; then decide, according to the size of the smaller subgroup, which part to continue recursively, or to stop if x is the desired order statistic; see Figure 1 for an illustration in terms of binary search trees. For more details, see Guibas [15] and Mahmoud [26]. This algorithm, although inefficient in the worst case, has linear mean when given a sequence of n independent and identically distributed continuous random variables, or equivalently, when given a random permutation of n elements, where, here and throughout this paper, all n! permutations are equally likely. Let C_{n,m} denote the number of comparisons used by quickselect for finding the m-th smallest element in a random permutation, where the first partitioning stage uses n - 1 comparisons. Knuth [23] was the first to show, by some differencing argument, that

    E(C_{n,m}) = 2( n + 3 + (n+1)H_n - (m+2)H_m - (n+3-m)H_{n+1-m} ),   1 <= m <= n,

where H_m = sum_{1 <= k <= m} 1/k. A more transparent asymptotic approximation is also given.

∗ Part of the work of this author was done while he was visiting School of C...
Knuth [23] was the first to show, by some di#erencing argument, that E(C n,m ) = 2 (n + 3 + (n + 1)H n (m + 2)Hm (n + 3 -m)H n+1-m ) , n, where Hm = 1#k#m k -1 . A more transparent asymptotic approximation is E(C n,m ) (#), (#) := 2 #), # Part of the work of this author was done while he was visiting School of C... , 2004 "... Qian, Luscombe, and Gerstein (2001) introduced a model of the diversification of protein folds in a genome that we may formulate as follows. Consider a multitype Yule process starting with one individual in which there are no deaths and each individual gives birth to a new individual at rate one. Wh ..." Cited by 3 (1 self) Add to MetaCart Qian, Luscombe, and Gerstein (2001) introduced a model of the diversification of protein folds in a genome that we may formulate as follows. Consider a multitype Yule process starting with one individual in which there are no deaths and each individual gives birth to a new individual at rate one. When a new individual is born, it has the same type as its parent with probability 1 − r and is a new type, different from all previously observed types, with probability r. We refer to individuals with the same type as families and provide an approximation to the joint distribution of family sizes when the population size reaches N. We also show that if 1 ≪ S ≪ N 1−r, then the number of families of size at least S is approximately CNS −1/(1−r) , while if N 1−r ≪ S the distribution decays more rapidly than any power. Running head: Power laws for gene family sizes. ∗ Partially supported by NSF grants from the probability program (0202935) and from a joint DMS/NIGMS initiative to support research in mathematical biology (0201037). † Supported by an NSF Postdoctoral Fellowship. , 2009 "... We deal with the random combinatorial structures called assemblies. By weakening the logarithmic condition which assures regularity of the number of components of a given order, we extend the notion of logarithmic assemblies. 
Using the author’s analytic approach, we generalize the so-called Fundamen ..." Cited by 1 (1 self) Add to MetaCart We deal with the random combinatorial structures called assemblies. By weakening the logarithmic condition which assures regularity of the number of components of a given order, we extend the notion of logarithmic assemblies. Using the author’s analytic approach, we generalize the so-called Fundamental Lemma giving independent process approximation in the total variation distance of the component structure of an assembly. To evaluate the influence of strongly dependent large components, we obtain estimates of the appropriate conditional probabilities by unconditioned ones. These estimates are applied to examine additive functions defined on such a class of structures. Some analogs of Major’s and Feller’s theorems which concern almost sure behavior of sums of independent random variables are proved. 1 "... ABSTRACT: For a subset S of positive integers let �(n, S) be the set of partitions of n into summands that are elements of S. For every λ ∈ �(n, S), let Mn(λ) be the number of parts, with multiplicity, that λ has. Put a uniform probability distribution on �(n, S), and regard Mn as a random variable. ..." Add to MetaCart ABSTRACT: For a subset S of positive integers let �(n, S) be the set of partitions of n into summands that are elements of S. For every λ ∈ �(n, S), let Mn(λ) be the number of parts, with multiplicity, that λ has. Put a uniform probability distribution on �(n, S), and regard Mn as a random variable. In this paper the limiting density of the (suitably normalized) random variable Mn is determined for sets that are sufficiently regular. In particular, our results cover the case S ={Q(k) : k ≥ 1}, where Q(x) is a fixed polynomial of degree d ≥ 2. 
For specific choices of Q, the limiting density has appeared before in rather different contexts such as Kingman’s coalescent, and processes associated with the maxima of Brownian bridge and Brownian meander processes. © 2007 Wiley Periodicals, Inc. Random "... partitions with parts in the range of a polynomial ∗ ..." , 2008 "... Consider n players whose “scores ” are independent and identically distributed values {Xi} n i=1 from some discrete distribution F. We pay special attention to the cases where (i) F is geometric with parameter p → 0 and (ii) F is uniform on {1,2,...,N}; the latter case clearly corresponds to the cla ..." Add to MetaCart Consider n players whose “scores ” are independent and identically distributed values {Xi} n i=1 from some discrete distribution F. We pay special attention to the cases where (i) F is geometric with parameter p → 0 and (ii) F is uniform on {1,2,...,N}; the latter case clearly corresponds to the classical occupancy problem. The quantities of interest to us are, first, the U-statistic W which counts the number of “ties” between pairs i,j; second, the univariate statistic Yr, which counts the number of strict r-way ties between contestants, i.e., episodes of the 1 form Xi1 = Xi2 =... = Xir; Xj = Xi1;j = i1,i2,...,ir; and, last but not least, the multivariate vector ZAB = (YA,YA+1,...,YB). We provide Poisson approximations for the distributions of W, Yr and ZAB under some general conditions. New results on the joint distribution of cell counts in the occupancy problem are derived as a corollary. 1
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=271913","timestamp":"2014-04-17T15:58:56Z","content_type":null,"content_length":"27303","record_id":"<urn:uuid:a40731c1-e831-440f-941b-c61284c6ac25>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00526-ip-10-147-4-33.ec2.internal.warc.gz"}
NAHB: The Tax Benefits of Homeownership
Special Studies, March 27, 2009
By Robert D. Dietz, Ph.D.
Report available to the public as a courtesy of HousingEconomics.com

Purchasing a home is typically the largest purchase, and among the most important financial decisions, a family makes. There are numerous factors that influence the home buying decision, and among the most important are the tax benefits that help offset some of the cost of homeownership.[1] Previous NAHB research has discussed the federal government's (flawed) budget measurement and policy justifications for these housing tax law provisions. This article examines how these tax benefits reduce the cost of homeownership for individual homeowners and homebuyers at certain mortgage amounts and income levels.

Using the methods developed in the paper, a household, for example, with $80,000 in annual income who obtains a $200,000 mortgage will save on average $1,765 in the first year of homeownership. By the end of the fifth year of homeownership, the household will save on average $8,607 on taxes, and this amount grows to $19,488 by the end of the average period of ownership, twelve years. This stylized homeowner can expect to save $21,650 in capital gains taxation, yielding a total benefit of $41,138 over the expected period of homeownership. Further, the paper provides variants of these calculations if the analysis allows the homeowner's income to increase with their age and labor market experience. For example, the five-year tax savings for this homeowner increases to $9,723. The paper also considers how these numbers are increased by the existence of the temporary $8,000 first-time home buyer tax credit. In the case illustrated above, the five-year tax savings estimate increases 82% from $9,723 to $17,723.
Homeownership Tax Benefits

There are three major tax benefits for homeowners: deductibility of mortgage interest, deductibility of real estate taxes, and the capital gain tax exclusion for principal residences.[2] Taken together, these benefits significantly reduce the cost of homeownership, and each represents a significant provision of law. According to the Congressional Joint Committee on Taxation, for fiscal year 2008 the tax expenditure (approximately the size of the program in terms of tax savings) of the mortgage interest deduction totals $67.0 billion, the real estate tax deduction equals $24.6 billion, and the capital gain exclusion sums to $16.8 billion.[3]

As these estimates show, the largest benefit for most homebuyers is the ability to deduct home mortgage interest. The tax code permits homeowners who itemize their federal income tax deductions to reduce their taxable income by the annual amount of mortgage interest paid on a first (and second) home, up to $1 million in total home mortgage debt. Further, taxpayers may deduct interest allocable to up to $100,000 of home equity loans.[4] For the purpose of the Alternative Minimum Tax (AMT), taxpayers may deduct non-home-equity-loan interest from AMT taxable income as well.[5] Itemizing homeowners may also deduct state and local real estate taxes paid on an owner-occupied home.[6]

Finally, taxpayers may exclude from capital gains taxation the proceeds from the sale of a principal residence. Taxpayers are limited in the amount of gain that may be excluded from tax: $500,000 of gain for married homeowners and $250,000 for single homeowners. Recent changes in tax law reduce these maximum exclusion amounts proportionally for the amount of time the home is actually used as a principal residence. Periods of ownership prior to January 1, 2009 are treated as periods of principal-residence use under a grandfathering rule included in the law.
Measuring the Tax Benefits of the Mortgage Interest and Real Estate Tax Deductions

Calculating the net benefit of the major homeownership deductions seems straightforward but can lead to overestimation if not done in the context of other income tax rules. At first glance, the monetary value of the deductions is equal to the sum of the deductions times the marginal tax rate. For example, a homeowner who deducts $10,000 of mortgage interest and real estate taxes and who is in the 25% tax bracket would seem to realize a tax savings of $2,500 on his/her income tax return.

However, this calculation overstates the benefit, on average, by failing to account for the fact that the taxpayer must itemize in order to receive a net benefit from these deductions. Unless the sum of the taxpayer's itemized deductions exceeds the standard deduction (the deduction available in lieu of itemization), it is not to the taxpayer's advantage to itemize. This itemization decision implies that a certain amount of the summed itemized deductions yields no net benefit to the taxpayer, because the standard deduction is available regardless. For example, if a taxpayer in the 25% tax bracket has a standard deduction of $5,700 and itemized deductions totaling $6,000, the net value of the deductions is not $1,500 (25% times $6,000). Even with no itemized deductions available, the standard deduction reduces the tax payment by $1,425 (25% times $5,700). So the true, incremental value of the itemized deductions in this example is the difference between $1,500 and $1,425, or $75. Of course, the marginal value (the value of the next dollar of deductions) is still 25 cents, but it is the average net value that determines the realized value of the homeownership tax benefits.

Calculating an Example

We can now estimate the true tax benefits of homeownership for examples of various taxpayers.
Consider a homebuyer with gross income of $60,000 who purchases a principal residence in tax year 2009 with a mortgage of $180,000. Assuming a mortgage interest rate of 5.86%, the first-year mortgage interest payment is approximately $10,580.[7] Conservatively, assume that the buyer makes a downpayment of 20%, so the purchase price of the home is $225,000. Further assume that property taxes are equal to 1.2% of the market price.[8] Thus, this taxpayer also pays $2,700 in potentially deductible state and local real estate taxes in the first year of ownership.

Assuming the taxpayer is married and files a joint return, the household could claim a standard deduction of $11,400 in 2009. Clearly, with $13,280 in itemized deductions from mortgage interest and real estate taxes alone, the taxpayer will not claim the standard deduction, and will instead itemize deductions on Schedule A of the 1040 income tax return. However, to calculate the net benefit of the housing tax deductions, we need an estimate of all the other itemized deductions in order to calculate the incremental value. Using Internal Revenue Service Statistics of Income data for 2006, we estimate the average sum of all non-housing itemized deductions by income class. For this stylized taxpayer, the estimated total is $6,936 in charitable contributions, state and local income or sales taxes, personal property taxes, and all other itemized deductions. With this information, we can calculate the taxpayer's taxable income (gross income minus itemized deductions) and marginal income tax rate of 15%.

Now we can estimate the net value of the housing benefits. The net value is equal to the housing itemized deductions ($13,280), minus the difference between the standard deduction ($11,400) and the non-housing itemized deductions ($6,936), all times the marginal tax rate of 15%. This calculation yields a net benefit of $1,322 for the first year of homeownership.
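The arithmetic above can be checked with a short script. This is an illustrative sketch only; the function name and signature are mine, not the paper's:

```python
def net_benefit(housing_deductions, other_itemized, standard_deduction, rate):
    """Incremental value of housing deductions, accounting for the
    standard deduction that itemizing forgoes."""
    itemized_total = housing_deductions + other_itemized
    if itemized_total <= standard_deduction:
        return 0.0  # taxpayer is better off taking the standard deduction
    lost_standard = max(standard_deduction - other_itemized, 0.0)
    return (housing_deductions - lost_standard) * rate

# Toy example from the text: $6,000 itemized vs. a $5,700 standard
# deduction in the 25% bracket yields only $75 of incremental value.
print(net_benefit(6_000, 0, 5_700, 0.25))  # 75.0

# Stylized homebuyer: $13,280 of housing deductions, $6,936 of other
# itemized deductions, $11,400 standard deduction, 15% bracket.
print(round(net_benefit(13_280, 6_936, 11_400, 0.15)))  # 1322
```

Both printed values match the figures worked out in the text above.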
Using this approach and adjusting the declining annual mortgage interest payment consistent with a self-amortizing loan, we can calculate average tax savings for certain income classes and mortgage amounts.[9] Table 1 provides these amounts for the first year of homeownership. (Table 1)

The example calculated above is found in the row for $180,000 in mortgage and $60,000 in borrower income. As can be seen in this table, the benefits of the tax provisions increase with borrower income and mortgage amount. Nonetheless, as demonstrated in a previous article, most of these benefits are claimed by middle-income homeowners ($40,000 to $200,000 AGI).

Summing over the first five years of homeownership, and adjusting for the declining mortgage interest payment over time, yields the estimates shown in Table 2. (Table 2)

Previous NAHB research indicates that twelve years is a reasonable estimate for the average duration of ownership of a single dwelling. Correspondingly, the twelve-year estimates using the method in this paper are shown in Table 3. (Table 3)

Graphing three examples of these results yields the year-by-year estimates of the tax savings of homeownership attributable to the mortgage interest and real estate tax deductions, as seen in Figure 1. (Figure 1)

Principal Residence Gain Exclusion

The final major housing tax incentive is the exclusion of capital gains for the sale of a principal residence. To calculate the benefit of this tax provision, we must forecast the average price appreciation over the average duration of homeownership. We use the average housing price appreciation rate over the prior 20 years, which includes the historic price declines of 2007 and 2008 as well as the period of unprecedented price appreciation that preceded them. This average is 4.23% according to the Case-Shiller National U.S. Home Price Index.
We use a conservative estimate of the capital gains tax rate (15% under present law, despite the likelihood that it will increase to 20% in 2011) to calculate the tax benefit of the exclusion. With these parameters, and assuming that the home is sold at the end of twelve years of ownership, we can calculate the tax benefits realized by the capital gain exclusion, which are reported in Table 4. (Table 4)

Summing the benefits of the mortgage interest and real estate tax deductions with the capital gain exclusion yields the twelve-year benefit estimates shown in Table 5, which in most cases represent significant tax savings for the homeowner. (Table 5)

Lifetime Income Growth

One limitation of this approach for calculating the value of homeownership tax savings is that it assumes the homebuyer has a fixed income for the period in which he/she owns the home. Clearly, this is not a reasonable assumption. It matters because while a homebuyer may have a relatively low income (and thus a relatively low marginal income tax rate) when purchasing a home, his/her income and tax rate are likely to grow as the homeowner ages and gains experience in his/her career. Assume the average annual income increase (at the taxpayer/homeowner level due to aging, as opposed to per capita increases for all workers) is 4%.

Using this approach, re-estimating the five-year savings according to initial borrower income yields the larger values reported in Table 6. (Table 6) For example, the tax savings are higher in the $80,000 column in Table 6 than they are in Table 2, reflecting homeowners who enter a higher tax bracket by the fifth year of homeownership. Consider the graph in Figure 2, which reports the tax savings for a borrower with an initial income of $80,000 who obtains a home with a $250,000 mortgage.
In year five, the cumulative savings from homeownership begin to diverge because the value of the homeownership tax incentives grows as the homeowner's income increases, which is presumably correlated with the homeowner's experience in the labor market. At the time of sale, the difference in this example is more than $11,000 in tax savings, all due to increases in the homeowner's marginal tax rate. (Figure 2)

This article has presented estimates of the financial benefits of homeownership. These savings total thousands of dollars over the period of ownership and are due to the deductibility of mortgage interest and real estate taxes, as well as the principal residence capital gain exclusion. The estimates in this paper account for the lost standard deduction that results when a taxpayer itemizes, and thus reflect the incremental or true value of the housing tax incentives.

An additional tax incentive that became available in 2009 is the $8,000 first-time home buyer tax credit. Including the effects of this refundable credit increases the estimates in each of these tables on average by $8,000, which represents a significant increase in the tax savings of the first five years of homeownership. For example, for a homebuyer with an income of $70,000 who obtains a mortgage of $200,000, the tax savings increase from $7,718 to $15,718, an increase of 104%. As another example, a homebuyer with $80,000 in income and a $200,000 mortgage can expect his/her five-year tax savings estimate to increase 82%, from $9,723 to $17,723. The combination of the standard tax benefits of homeownership with the temporary tax credit makes 2009 an attractive time to purchase a home.
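The capital-gain and tax-credit figures above can likewise be sanity-checked. The function below is a hypothetical sketch of the paper's method (the function and parameter names are mine, and the paper's table values may differ slightly due to rounding and additional assumptions):

```python
def gain_exclusion_benefit(price, years=12, appreciation=0.0423,
                           cap_gains_rate=0.15, cap=500_000):
    """Tax avoided by excluding the capital gain on a principal residence,
    assuming compound annual appreciation and the married-couple cap."""
    gain = price * ((1 + appreciation) ** years - 1)
    return min(gain, cap) * cap_gains_rate

# A $225,000 home held twelve years (illustrative only).
print(round(gain_exclusion_benefit(225_000)))

# The $8,000 credit as a percentage increase in five-year savings:
print(round(8_000 / 7_718 * 100))  # 104 (the $70,000 income / $200,000 mortgage case)
print(round(8_000 / 9_723 * 100))  # 82  (the $80,000 income / $200,000 mortgage case)
```

The last two lines reproduce the 104% and 82% increases quoted in the conclusion.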
For more information about this item, please contact Robert Dietz at 800-368-5242 x8285 or via email at rdietz@nahb.org.

[1] It should be noted that in this article "benefit" does not equate with "subsidy." There are tax benefits for owners of rental housing as well, including interest and depreciation deductions, as well as the Low-Income Housing Tax Credit. Previous research on the home buying decision can be found

[2] There are other benefits not considered in this article, including the tax exemption for imputed owner's rent, the deduction for mortgage insurance, and the tax treatment of reverse mortgages.

[3] Joint Committee on Taxation. 2008. Estimates of Federal Tax Expenditures for Fiscal Years 2008-2012.

[4] It is important to note that not all cash-out mortgage refinancing is classified as a home equity loan, in contrast to acquisition indebtedness, which is subject to the larger $1 million cap. Provided the proceeds of a cash-out refinancing are used for home improvement or residential investment, such debt is not home equity debt but the more favorably treated acquisition indebtedness.

[5] The calculations in this paper do not include interactions with the AMT.

[6] For more information on real estate tax statistics, consult the following. To the extent that these taxes are in fact fees assessed for a specific, targeted benefit to the home in question, such fees may not be deducted from taxable income.

[7] 5.86% is the four-quarter average for 2008 from the Freddie Mac Primary Market Survey.

[8] The 2004 American Housing Survey reported a 1.17% estimated average annual state and local rate of residential property taxation.

[9] This analysis assumes real estate tax payments and all other itemized deductions increase with the rate of inflation.
Sunny Isles Beach, FL SAT Math Tutor

Find a Sunny Isles Beach, FL SAT Math Tutor

...Teaching English to adults has been a rewarding and interesting experience and has allowed me to expand my teaching methods and help students reach their potential in a second language. I have taught students from all over Latin America and Russia. I have a yoga certification and have been teaching yoga since 2003.
16 Subjects: including SAT math, Spanish, chemistry, biology

...I have also taken coursework in Microsoft Office programs such as Word, Excel, and PowerPoint. I understand the basic workings of the computer, and can assist in troubleshooting many problems. I can install and set up most systems.
33 Subjects: including SAT math, reading, writing, geometry

...I show my students how a good result must be obtained progressively, however fast that process might be, and also how one result leads to another. I also try to make my students math-savvy by providing them with multiple ways to reach a result. This allows the student to easily get out of troubl...
20 Subjects: including SAT math, reading, English, ESL/ESOL

...I've also been educated by Palmer Trinity School's Learning Specialist about ways to teach and adapt to special-needs students and various learning differences. After noticing that every major paper I wrote as an undergraduate student was about religion and the supernatural, I realized that the ...
40 Subjects: including SAT math, reading, writing, biology

I have a Mechanical Engineering degree from Florida International University. I have a lot of experience: I worked for Miami Dade College for 4 years, worked in a high school for a year, and have been working at Barry University for the past 3 years. My approach to mathematics is that it requires a lot of patience, because not everyone grasps it the same way.
8 Subjects: including SAT math, calculus, geometry, algebra 1
Warn about possible truncation (or roundoff) errors. Most of these are related to integer arithmetic. By default, all warnings are turned on.

This setting provides detailed control over the warnings about possible truncation errors. The list consists of keywords separated by commas or colons. Since all warnings are on by default, include a keyword prefixed by no- to turn off a particular warning. There are three special keywords: all to turn on all the warnings about truncation, none to turn them all off, and help to print the list of all the keywords with a brief explanation of each. If list is omitted, -truncation is equivalent to -truncation=all, and -notruncation is equivalent to -truncation=none.

The warning keywords, with their meanings, are as follows:

- use of the result of integer division as an exponent. This suggests that a real quotient is intended. An example would be writing X**(1/3) to evaluate the cube root of X. The correct expression is X**(1./3.).
- conversion of an expression involving an integer division to real. This suggests that a real quotient is intended.
- division in an integer constant expression that yields a result of zero.
- exponentiation of an integer by a negative integer (which yields zero unless the base integer is 1 in magnitude). This suggests that a real base is intended.
- automatic conversion of a lower precision quantity to one of higher precision. The loss of accuracy for real variables in this process is comparable to the corresponding demotion. No warning is given for promotion of integer quantities to real, since this is ordinarily exact.
- use of a non-integer DO index in a loop with integer bounds. An integer DO index with real bounds is always warned about, regardless of this setting.
- use of a non-integer array subscript.
- overspecifying a single precision constant. This may indicate that a double precision constant was intended.
- automatic conversion of a higher precision quantity to one of lower precision of the same type.
This warning only occurs when an explicit size is used in declaring the type of one or both operands in an assignment. For example, a warning will be issued where a REAL*8 variable is assigned to a REAL variable, if the default wordsize of 4 is in effect. A warning is also issued if a long integer is assigned to a shorter one, for example, if an INTEGER expression is assigned to an INTEGER*2 variable. There is one exception to this last case, namely if the right hand side of the assignment is a small literal constant (less than 128).

type-demotion: automatic conversion of a higher precision quantity to one of lower precision of a different type. This warning includes conversion of real quantities to integer, double precision to single precision real, and assignment of a longer character string to a shorter one.

The warnings about promotion and demotion also apply to complex constants, considering the precision to be that of the real or imaginary part. Warnings about promotions and demotions are given only when the conversion is done automatically, e.g. in expressions of mixed precision or in an assignment statement. If intrinsic functions such as INT are used to perform the conversion, no warning is given.

See also: -portability, -wordsize.
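The integer-division pitfall behind several of these warnings is easy to demonstrate outside Fortran. The Python sketch below mimics Fortran's truncating integer division with floor division (//); it is an illustration only, not ftnchek output:

```python
# Fortran evaluates 1/3 with integer division, giving 0, so X**(1/3)
# computes X**0 == 1 for any X. Python's floor division reproduces this.
x = 8.0

wrong = x ** (1 // 3)   # exponent is 1 // 3 == 0, so wrong == 1.0
right = x ** (1. / 3.)  # the intended cube root, approximately 2.0

print(wrong, right)
```

This is exactly the X**(1/3) versus X**(1./3.) distinction described above.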
Problem Solving

Problem Solving in Prolog
Reference: Bratko chapter 12

Aim: To illustrate search in AI using a fairly well-known example problem. We also briefly introduce a number of different methods for exploring a state space (or any other graph to be searched).

Keywords: breadth first search, depth first search, edge in a graph, goal state, graph, directed acyclic graphs, trees, binary trees, adjacency matrices, graph search algorithms, initial state, node, operator, state, path, search, vertex

Plan:
• states, operators and searching
• representing state spaces using graphs
• finding a path from A to B in a graph
• missionaries & cannibals: representing states & operators
• methods of search: depth-first, breadth-first

Problem solving has traditionally been one of the key areas of concern for Artificial Intelligence. Below, we present a common problem and demonstrate a simple solution.

Missionaries and Cannibals
• There are three missionaries and three cannibals on the left bank of a river.
• They wish to cross over to the right bank using a boat that can only carry two at a time.
• The number of cannibals on either bank must never exceed the number of missionaries on the same bank, otherwise the missionaries will become the cannibals' dinner!
• Plan a sequence of crossings that will take everyone safely across.

This kind of problem is often solved by a graph search method. We represent the problem as a set of states, which are snapshots of the world, and operators, which transform one state into another. States can be mapped to nodes of a graph and operators are the edges of the graph.

Before studying the missionaries and cannibals problem, we look at a simple graph search algorithm in Prolog. The missionaries and cannibals program will have the same basic structure.

Graph Representation
A graph may be represented by a set of edge predicates and a list of vertices.

edge(1, 5). edge(1, 7). edge(2, 1). edge(2, 7).
edge(3, 1). edge(3, 6). edge(4, 3). edge(4, 5).
edge(5, 8). edge(6, 4). edge(6, 5). edge(7, 5).
edge(8, 6). edge(8, 7).
vertices([1, 2, 3, 4, 5, 6, 7, 8]).

Finding a path
• Write a program to find a path from one node to another.
• It must avoid cycles (i.e. going around in a circle).
• A template for the clause is:

    path(Start, Finish, Visited, Path).

  Start is the name of the starting node.
  Finish is the name of the finishing node.
  Visited is the list of nodes already visited.
  Path is the list of nodes on the path, including Start and Finish.

The Path Program
• The search for a path terminates when we have nowhere to go.

    path(Node, Node, _, [Node]).

• A path from Start to Finish starts with a node, X, connected to Start, followed by a path from X to Finish.

    path(Start, Finish, Visited, [Start | Path]) :-
        edge(Start, X),
        not(member(X, Visited)),
        path(X, Finish, [X | Visited], Path).

Here is an example of the path algorithm in action.

Representing the state
Now we return to the problem of representing the missionaries and cannibals problem:
• A state is one "snapshot" in time.
• For this problem, the only information we need to fully characterise the state is:
    □ the number of missionaries on the left bank,
    □ the number of cannibals on the left bank,
    □ the side the boat is on.
  All other information can be deduced from these three items.
• In Prolog, the state can be represented by a 3-arity term:

    state(Missionaries, Cannibals, Side)

Representing the Solution
• The solution consists of a list of moves, e.g.

    [move(1, 1, right), move(2, 0, left)]

• We take this to mean that 1 missionary and 1 cannibal moved to the right bank, then 2 missionaries moved to the left bank.
• Like the graph search problem, we must avoid returning to a state we have visited before.
• The visited list will have the form:

    [MostRecent_State | ListOfPreviousStates]

Overview of Solution
• We follow a simple graph search procedure:
    □ Start from an initial state.
    □ Find a neighbouring state.
    □ Check that the new state has not been visited before.
    □ Find a path from the neighbour to the goal.
  The search terminates when we have found the state state(0, 0, right).

Top-level Prolog Code

    % mandc(CurrentState, Visited, Path)
    mandc(state(0, 0, right), _, []).
    mandc(CurrentState, Visited, [Move | RestOfMoves]) :-
        newstate(CurrentState, NextState),
        not(member(NextState, Visited)),
        make_move(CurrentState, NextState, Move),
        mandc(NextState, [NextState | Visited], RestOfMoves).

    make_move(state(M1, C1, left), state(M2, C2, right), move(M, C, right)) :-
        M is M1 - M2,
        C is C1 - C2.
    make_move(state(M1, C1, right), state(M2, C2, left), move(M, C, left)) :-
        M is M2 - M1,
        C is C2 - C1.

Possible Moves
• A move is characterised by the number of missionaries and the number of cannibals taken in the boat at one time.
• Since the boat can carry no more than two people at once, the only possible combinations are:

    carry(2, 0).
    carry(1, 0).
    carry(1, 1).
    carry(0, 1).
    carry(0, 2).

• where carry(M, C) means the boat will carry M missionaries and C cannibals on one trip.

Feasible Moves
• Once we have found a possible move, we have to confirm that it is feasible.
• I.e. it is not feasible to move more missionaries or more cannibals than are present on one bank.
• When the state is state(M1, C1, left) and we try carry(M, C), then M =< M1 and C =< C1 must be true.
• When the state is state(M1, C1, right) and we try carry(M, C), then M + M1 =< 3 and C + C1 =< 3 must be true.

Legal Moves
• Once we have found a feasible move, we must check that it is legal.
• I.e. no missionaries must be eaten.

    legal(X, X).
    legal(3, X).
    legal(0, X).

• The only safe combinations are when there are equal numbers of missionaries and cannibals or all the missionaries are on one side.
Generating the next state

    newstate(state(M1, C1, left), state(M2, C2, right)) :-
        carry(M, C),
        M =< M1,
        C =< C1,
        M2 is M1 - M,
        C2 is C1 - C,
        legal(M2, C2).
    newstate(state(M1, C1, right), state(M2, C2, left)) :-
        carry(M, C),
        M2 is M1 + M,
        C2 is C1 + C,
        M2 =< 3,
        C2 =< 3,
        legal(M2, C2).

The complete code, with instructions for use, is available at http://www.cse.unsw.edu.au/~billw/cs9414/notes/mandc/mandc.pro

Methods of Search
In the preceding example, the state space is explored in an order determined by Prolog. In some situations, it might be necessary to alter that order of search in order to make search more efficient. To see what this might mean, here are two alternative methods of searching a tree.

Depth first search begins by diving down as quickly as possible to the leaf nodes of the tree. Traversal can be done by:
• visiting the node first, then its children (pre-order traversal): a b d h e i j c f k g
• visiting the children first, then the node (post-order traversal): h d i j e b k f g c a
• visiting some of the children, then the node, then the other children (in-order traversal): h d b i e j a f k c g

There are many other search methods and variants on search methods. We do not have time to cover these in COMP9414, but you can find out about some of them in the text by Bratko. For example, chapter 12 deals with best-first search.

Summary: Problem Solving and Search in AI
We introduced the concepts of states and operators and gave a graph traversal algorithm that can be used as a problem solving tool. We applied this to solve the "missionaries and cannibals" problem. We also outlined depth-first search, breadth-first search, and alluded to the existence of a range of other search methods.

CRICOS Provider Code No. 00098G
Copyright (C) Bill Wilson, 2002, except where another source is acknowledged.
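For comparison with the Prolog program in these notes, the same state space and move rules can be sketched in Python using breadth-first search. This is a hypothetical translation, not part of the original notes:

```python
from collections import deque

CARRY = [(2, 0), (1, 0), (1, 1), (0, 1), (0, 2)]  # boatloads: (M, C)

def legal(m, c):
    # With m missionaries and c cannibals on the left bank (and 3 - m,
    # 3 - c on the right), no one is eaten iff m == 0, m == 3, or m == c.
    return m == 0 or m == 3 or m == c

def solve():
    """Breadth-first search over states (left_m, left_c, boat_side)."""
    start, goal = (3, 3, 'left'), (0, 0, 'right')
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, moves = queue.popleft()
        if state == goal:
            return moves
        m, c, side = state
        for dm, dc in CARRY:
            if side == 'left':
                nxt = (m - dm, c - dc, 'right')
            else:
                nxt = (m + dm, c + dc, 'left')
            nm, nc, nside = nxt
            if 0 <= nm <= 3 and 0 <= nc <= 3 and legal(nm, nc) \
                    and nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, moves + [(dm, dc, nside)]))
    return None  # no solution

print(len(solve()))  # 11 crossings: the shortest plan
```

Unlike the Prolog version's depth-first backtracking, BFS returns a shortest plan: the classic eleven-crossing solution.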
MathGroup Archive: August 2012

Re: Landau letter, Re: Mathematica as a New Approach...

• To: mathgroup at smc.vnet.net
• Subject: [mg127871] Re: Landau letter, Re: Mathematica as a New Approach...
• From: John Doty <noqsiaerospace at gmail.com>
• Date: Tue, 28 Aug 2012 04:53:42 -0400 (EDT)
• References: <11671312.22333.1344322926771.JavaMail.root@m06>

On Wednesday, August 15, 2012 1:31:53 AM UTC-6, Andrzej Kozlowski wrote:

> I agree with your interpretation of Landau's letter but I also think your
> remarks about mathematics miss the point of what mathematicians do.

I know what mathematicians do. Finding connections between ideas is at the core of mathematics. The best mathematicians (von Neumann, for example) follow those connections wherever they lead, and don't stop at arbitrary borders. http://www.scientificamerican.com/article.cfm?id=rethinking-labels-boosts-creativity has some relevance here.

> Mathematicians do not concern themselves with the physical universe - if
> they did they would be something else. The results which they prove are
> meaningful within their own realm. The exact nature of this "meaning" is
> complicated, but it essentially relates to "procedures" (how arguments
> are conducted) rather than any physical reality. A great deal of mathematics
> (for example, almost all of probability theory) is concerned with
> "infinity", which arguably has no physical meaning at all.

Except that in many cases, it has been physical scientists who *introduced* mathematicians to various uses of infinity (differentials, Fourier analysis, delta functions, ...). But that's history. It is also clear from history that mathematics developed from very concrete foundations in things like counting and measurement.
It is incomprehensible to me that many mathematicians wish to deny this, preferring to believe in Platonic fairy tales. A nasty consequence of this denial was the 1960s "New Math" curriculum for American schoolchildren. Supposed to strengthen math comprehension, it did exactly the opposite.

I cringe when I hear a mathematician talk about Fourier analysis as being about functions in L2. That notion leaves out a large part of the application space: "carrier waves", "flicker noise", delta functions, ... Here we see mathematicians willfully avoiding *meaningful* infinity.

> In 1910 the mathematician Oswald Veblen and the physicist
> James Jeans were discussing the mathematics curriculum for physicists at
> Princeton university. "We can safely omit group theory" argued Jeans,
> "this theory will never have any significance for physics". Veblen
> resisted and it is well known that this fact had a certain influence on
> the future history of physics.
> This example is, in fact, an excellent illustration of the main point
> that people who argue like you do not get.

Actually, this is a rather poor example for your argument. But first, to show you that I'm partially on your side here, let me give you a better one.

Non-Euclidean geometry was one of the great mathematical developments of the nineteenth century. It was driven entirely by the interests of mathematicians: it had no physical motivation. At the same time, the development of quaternions, their subsequent evolution into vector analysis, and the tremendously successful application of these developments to physics (especially electrodynamics) further entrenched three-dimensional Euclidean space as *the* model for physical space.

Then everything changed. Poincaré and Minkowski reformulated Lorentz/Einstein special relativity as non-Euclidean geometry. Einstein then combined Minkowski's geometry with Riemann's, added some physics, and came up with his general relativity.
GR was such a huge intellectual leap that it seems inconceivable that he could have taken it without the foundation provided by "pure" mathematicians. Without that foundation, I don't think that even now, a century later, we'd have an adequate theory of gravity for astrophysics. But group theory? For half a century after the discussions you describe group theory had little influence on physics. Then, it made its big splash with Gell-Mann and Ne'eman's SU(3) theory of the hadron spectrum. But how much insight really emerged from group theory here? I recall Victor Weisskopf explaining the theory to a group of freshmen (of which I was one). The gist was "the hadrons are the states of a spectroscopy, and they exhibit the patterns to be expected for a three particle spectroscopy". No group theory, all physics. OK, you might say: the discovery passed through group theory to mechanism, so group theory was therefore important. The problem with this idea is that a number of other physicists were hot on the trail here, and there was no barrier to skipping straight to mechanism. Gell-Mann and Ne'eman got there first, and they happened to be unusually committed to mathematical abstraction, but a more concretely-minded physicist could have found the mechanism directly: it was not deeply hidden. The subsequent influence of the SU(3) abstraction on the development of this theory was negative. While Gell-Mann was certainly aware of the three-particle mechanism (he coined the term "quark"), he believed that mechanism was unnecessary. The trouble was that physical mechanisms have consequences beyond symmetry. In particular, if you hit a blob of particles with a probe of sufficiently small wavelength, you'll see that it's lumpy. And that's exactly what experiments revealed. Hadrons are not content-free consequences of SU(3) symmetry: they are composite objects, and the SU(3) symmetry is a consequence of their composition.
This reveals the trouble with group theory here: it obfuscates the underlying physics. SU(3) could as easily represent the organizing principle behind somebody's stamp collection. The distinction between stamps and particles might not matter to mathematics, but it's a big deal in physics. But one good way to win a Nobel is to win the race. Gell-Mann and Ne'eman were the first ones to completely work out hadron spectroscopy: they won. A (to me unfortunate) consequence was that group theory has gained prominence in physics that goes far beyond its capacity for providing insight. For example, in place of Minkowski's clever geometry, the abstractionists now try to sell us the "Lorentz group". But the fact that Lorentz transforms form a group is trivial and unenlightening: it's the geometry that captures the physical essence here. > So one reason why existence and non-existence theorems are important is > that solving them leads to much deeper understanding of mathematics, > which in turn turns out often to involve unexpected applications in > other areas. But many presentations of theorems by mathematicians are unenlightening symbol manipulation. That seems to me to be at the core of Landau's complaint. > There is also another, more direct reason. Knowing that > there cannot be a general formula in radicals for the roots of a > polynomial equation means that we no longer need to try to find one and > instead can turn our attentions to other approaches. This is itself also > useful in applications (just think of the number of people who post to > this forum asking for "explicit" solutions of some equation or other). Indeed this is a very important result, but Galois theory itself is even less enlightening than other applications of group theory.
> Finally, where on earth did you get the idea that "philosophers have > comprehensively demolished mathematical Platonism" or indeed that > philosophers have "comprehensively demolished" any philosophical idea in > the entire history of philosophy (including, of course, the idea of the > Creator)? This is an astounding news to not only to me, but also news to > my wife, who has been a professor of philosophy at one of the world's > leading universities, has a PhD in the subject from Oxford University, > etc, etc. It also would be of interest to physicists like Roger Penrose > who, obviously in blissful ignorance of this great news, remain > unabashedly "mathematical platonists". Penrose's Platonism is the source of his bizarre pseudophysical theory of how the mind works. To me, it is profoundly unscientific, based in faith in his subjective experience rather than objective evidence. > Could you please let us know the name of the philosophers who have > performed this amazing feat? There's a *lot* of literature here: I'm surprised you are unfamiliar with it. Let's start with this paper: You also might read "Philosophy of Mathematics (5 Questions)", edited by Hendricks and Leitgeb, "18 Unconventional Essays on the Nature of Mathematics", edited by Hersh, and Hersh's book "What is Mathematics, Really?". There are a variety of good arguments against Platonism in the works above, but to me one seems especially unanswerable: mathematical Platonism requires that mathematicians possess a supernatural sense that connects them to an objective reality outside the physical world. There is neither any scientific evidence for this nor any explanation for what biological function such a sense would serve. On the other hand, I'm very impressed by Núñez and Lakoff's idea that mathematics is a phenomenon that emerges naturally from sufficiently sophisticated embodied cognition.
Based in actual science (experimental psychology), this is a very plausible approach to understanding the true nature of mathematics.
Nuclear Waste Disposal into Sun or outside Solar System? Also, the post about just letting the craft decay is incorrect. The craft will still remain in orbit around the Sun trailing Earth. To get it to decay, you would have to slow it down to 0. Since it is following Earth in its orbit, its speed must be the same as that of the Earth for that orbital radius around the Sun, which means that your delta v would have to be the negative of Earth's velocity, which is HUGE! That would definitely not be the minimum delta v. One thing to remember about why delta v is so important is that the craft will have to carry 7-9 times the amount of fuel compared to cargo, depending on specific impulse. Also, I am not worrying too much about which one is lesser right now. Either way, I have to solve both and then simply find out, so which one is lesser is actually of little relevance in the end, as the motivation for the project is simply to design the orbits and perform the calculations. In the end, the conclusion would simply compare the 2 different delta v's and make a recommendation. I am still very interested in Janus's numbers! Was it really that easy to calculate? Also, I apologize for not quoting. I'm a newbie here. Orbital velocity is found by [tex] V_o = \sqrt{\frac{GM}{r}}[/tex] where M is the mass you are orbiting (in this case the Sun) and r is the radius of your orbit (assuming a circular orbit). Escape velocity is found by [tex] V_e = \sqrt{\frac{2GM}{r}}[/tex] Note that the only difference is the 2. Thus escape velocity is 1.414... times greater than the orbital velocity at any given distance from the Sun. In order to hit the Sun, you have to, as explained by DH, put the rocket in an orbit that grazes the Sun.
To work out what it takes to do this, you use the vis-viva equation: [tex]V = \sqrt {GM \left ( \frac{2}{r}- \frac{1}{a} \right )}[/tex] Here, a is the semi-major axis, which is found by taking the sum of the radius of the Sun and the radius of the Earth's orbit (r) and dividing it in half. This will give you the velocity that you would have to slow the rocket to in order to have it skim the Sun's surface.
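The three formulas in this thread can be checked numerically. A back-of-envelope sketch (the values for G, the Sun's mass and radius, and Earth's orbital radius are standard approximations, not figures from this thread):

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30       # mass of the Sun, kg
R_sun = 6.957e8        # radius of the Sun, m
r_earth = 1.496e11     # mean radius of Earth's orbit, m

# Circular orbital velocity at Earth's distance: V_o = sqrt(GM/r)
v_orbit = math.sqrt(G * M_sun / r_earth)

# Escape velocity from the Sun at Earth's distance: V_e = sqrt(2GM/r)
v_escape = math.sqrt(2 * G * M_sun / r_earth)

# Vis-viva: speed at aphelion of a Sun-grazing ellipse,
# with semi-major axis a = (r_earth + R_sun) / 2
a = (r_earth + R_sun) / 2
v_graze = math.sqrt(G * M_sun * (2 / r_earth - 1 / a))

print(f"orbital velocity  : {v_orbit / 1000:.1f} km/s")
print(f"escape velocity   : {v_escape / 1000:.1f} km/s")
print(f"delta-v to escape : {(v_escape - v_orbit) / 1000:.1f} km/s")
print(f"delta-v to graze  : {(v_orbit - v_graze) / 1000:.1f} km/s")
```

Running this shows the point the thread is circling: slowing down enough to graze the Sun (~27 km/s of delta-v) costs far more than speeding up enough to escape the solar system (~12 km/s).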
Relationship between seminormed, normed, spaces and Kolmogrov top. spaces I am having trouble with a result in my text left as an exercise. Let (X, τ) be a semi-normed topological space: norm(0) = 0 norm(a * x) = abs(a) * norm(x) norm( x + y) <= norm(x) + norm(y) My text states that X is a normed vector space if and only if X is Kolmogrov. It claims it to be trivial and leaves it as an exercise. I'm able to get one direction, but I'm not very happy with it. (=> direction) Assume X is normed, let x,y be in X, such that x != y. Then d(x,y) > 0. I am tempted to use an open ball argument, claiming that there exists an open ball about x which cannot contain y, but then how do I relate this notion of open balls in a metric space to the open set required by the Kolmogrov (distinguishable) condition? In terms of the other direction, I am entirely lost (Kolmogrov and semi-normed imples normed). Can anyone provide some insight, I just took a course in Real Analysis last semester and I did quite well, and I'm having trouble generalizing my insights now to topological spaces. With the semi-normed space we lose discernability, and I have a hunch that the Kolmogrov condition patches that problem, but I just can't get there.
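A sketch of both directions, as I would reconstruct them (treat this as a hint, not the textbook's proof). The key observation for the poster's worry is that in the seminorm topology the open balls *are* open sets, so the ball argument directly produces the open set the Kolmogorov condition asks for:

```latex
\textbf{($\Rightarrow$)} Suppose the seminorm is a norm and $x \neq y$.
Then $r := \|x - y\| > 0$, and the open ball $B(x, r)$ is itself an open
set in the seminorm topology; it contains $x$ but not $y$ (if
$y \in B(x, r)$ then $\|x - y\| < r$, a contradiction).  So any two
distinct points are topologically distinguishable: $X$ is $T_0$.

\textbf{($\Leftarrow$)} Contrapositive: suppose the seminorm is
\emph{not} a norm, so there is some $x \neq 0$ with $\|x\| = 0$.  Every
open set $U \ni 0$ contains a ball $B(0, \varepsilon)$, and
$\|x - 0\| = 0 < \varepsilon$ puts $x$ in $U$; symmetrically, every open
set containing $x$ contains $0$.  Thus $0$ and $x$ are topologically
indistinguishable and $X$ fails $T_0$.  Hence if $X$ is Kolmogorov, the
seminorm must be a norm.
```

This also confirms the hunch in the question: the Kolmogorov condition is exactly what rules out the nonzero vectors of seminorm zero that break discernibility.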
Projectile Motion Date: 02/27/2001 at 22:20:45 From: D. Gilbert Subject: Setting up quadratic equations If a cannon is firing a projectile with an initial upward velocity of 100 feet per second, how do you set up a quadratic equation for the projectile motion of the cannonball it is firing? I know how to solve quadratic equations, but never thought about setting one up. Thank you for your help. Date: 02/28/2001 at 10:57:08 From: Doctor Ian Subject: Re: Setting up quadratic equations Starting from first principles, we have Newton's law: a = F/m We can integrate acceleration to get velocity: v = (F/m)t + v_i And we can integrate velocity to get (the vertical) position: p = (1/2)(F/m)t^2 + (v_i)t + p_i In this equation, (F/m) is the familiar gravitational acceleration, g; v_i is the initial velocity; and p_i is the initial position. The problem tells you v_i: it's 100 feet per second. You can choose p_i to be whatever you want, although it's normal to set the location from which the motion begins as p = 0. And g is 32 ft/sec^2 in a _downward_ direction. If you're not familiar with integration, don't worry about it. I just did that to show you where the equations came from. In most problems of this type, 'setting up' just means choosing the right equation, p = (1/2)gt^2 + (v_i)t + p_i and going from there. Anyway, so now you end up with p = 100t - (1/2)(32)t^2 If you choose some height above the ground, you can solve for the time t that it takes to reach that height. (Note that there will be two solutions everywhere except at the top of the arc, since what goes up must come back down.) Or, if you choose some time of flight, you can determine the height of the cannonball at that time. Note that as time increases, the height eventually becomes negative, which makes sense if you think of launching the cannonball from the edge of a cliff: o o o o p > 0 o o ------- o p = 0 o p < 0 I hope this helps. 
Write back if you'd like to talk about this some more, or if you have any other questions. - Doctor Ian, The Math Forum
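Doctor Ian's equation $p = 100t - \frac{1}{2}(32)t^2$ can be explored with a few lines of code. A minimal sketch (the function and variable names are my own, not from the answer above):

```python
import math

def height(t, v_i=100.0, g=32.0, p_i=0.0):
    """Vertical position p = p_i + v_i*t - (1/2)*g*t^2 (feet, seconds)."""
    return p_i + v_i * t - 0.5 * g * t ** 2

def times_at_height(p, v_i=100.0, g=32.0, p_i=0.0):
    """Solve p = p_i + v_i*t - (g/2)*t^2 for t via the quadratic formula."""
    a, b, c = -0.5 * g, v_i, p_i - p
    disc = b * b - 4 * a * c
    if disc < 0:
        return []          # the cannonball never reaches that height
    r = math.sqrt(disc)
    return sorted([(-b + r) / (2 * a), (-b - r) / (2 * a)])

# Two solutions at launch height, as the answer notes: launch and landing.
print(times_at_height(0))
# Peak height at half the time of flight.
print(height(3.125))
```

As the answer predicts, every height below the top of the arc yields two times ("what goes up must come back down"), and heights above it yield none.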
An intuitionistic theory of types: predicative part Results 11 - 20 of 119

, 1998 — Cited by 44 (7 self): We investigate semantical normalization proofs for typed combinatory logic and weak λ-calculus. One builds a model and a function `quote' which inverts the interpretation function. A normalization function is then obtained by composing quote with the interpretation function. Our models are just like the intended model, except that the function space includes a syntactic component as well as a semantic one. We call this a `glued' model because of its similarity with the glueing construction in category theory. Other basic type constructors are interpreted as in the intended model. In this way we can also treat inductively defined types such as natural numbers and Brouwer ordinals. We also discuss how to formalize λ-terms, and show how one model construction can be used to yield normalization proofs for two different typed λ-calculi -- one with explicit and one with implicit substitution. The proofs are formalized using Martin-Löf's type theory as a meta language and mechanized using the A...

- Semantics and Logics of Computation, 1997 — Cited by 40 (4 self): Abstraction is written as [x: σ]M instead of λx: σ.M and application is written M(N) instead of App_{[x:σ]τ}(M; N). Iterated abstractions and applications are written [x_1: σ_1; ...; x_n: σ_n]M and M(N_1, ..., N_n), respectively. The lacking type information can be inferred. The universe is written Set instead of U. The El-operator is omitted. For example the Π-type is described by the following constant and equality declarations (understood in every valid context): ⊢ Π : (σ: Set; τ: (σ)Set)Set ⊢ App : (σ: Set; τ: (σ)Set; m: Π(σ, τ); n: σ) τ(n) ⊢ λ : (σ: Set; τ: (σ)Set; m: (x: σ)τ(x)) Π(σ, τ) σ: Set; τ: (σ)Set; m: (x: σ)τ(x); n: σ ⊢ App(σ, τ, λ(σ, τ, m), n) = m(n) Notice how terms with free variables are represented as framework abstractions (in the type of λ) and how substitution is represented as framework application (in the type of App and in the equation). In this way the burden of dealing correctly with variables, substitution, and binding is s...

- 191–204, 1998 — Cited by 32 (8 self): The notion of a universe of types was introduced into constructive type theory by Martin-Löf (1975). According to the propositions-as-types principle inherent in...

, 1993 — Cited by 31 (2 self): This thesis contains an investigation of Coquand's Calculus of Constructions, a basic impredicative Type Theory. We review syntactic properties of the calculus, in particular decidability of equality and type-checking, based on the equality-as-judgement presentation. We present a set-theoretic notion of model, CC-structures, and use this to give a new strong normalization proof based on a modification of the realizability interpretation. An extension of the core calculus by inductive types is investigated and we show, using the example of infinite trees, how the realizability semantics and the strong normalization argument can be extended to non-algebraic inductive types. We emphasize that our interpretation is sound for large eliminations, e.g. allows the definition of sets by recursion. Finally we apply the extended calculus to a non-trivial problem: the formalization of the strong normalization argument for Girard's System F. This formal proof has been developed and checked using the...

- Proceedings of CSL '93, LNCS 832, 1999: "this paper is similar to the one in [2]. In this paper they define a normalization function for simply typed ..."

- In Foundations of Object Oriented Languages 3, 1996 — Cited by 29 (8 self): In this paper we present an extension to basic type theory to allow a uniform construction of abstract data types (ADTs) having many of the properties of objects, including abstraction, subtyping, and inheritance. The extension relies on allowing type dependencies for function types to range over a well-founded domain. Using the propositions-as-types correspondence, abstract data types can be identified with logical theories, and proofs of the theories are the objects that inhabit the corresponding ADT. 1 Introduction In the past decade, there has been considerable progress in developing a formal account of a theory of objects. One property of object oriented languages that makes them popular is that they attack the problem of scale: all object oriented languages provide mechanisms for providing software modularity and reuse. In addition, the mechanisms are intuitive enough to be followed easily by novice programmers. During the same decade, the body of formal mathematics has be...

- Annals of Pure and Applied Logic, 2003 — Cited by 28 (11 self): Induction-recursion is a powerful definition method in intuitionistic type theory in the sense of Scott ("Constructive Validity") [31] and Martin-Löf [17, 18, 19]. The first occurrence of formal induction-recursion is Martin-Löf's definition of a universe à la Tarski [19], which consists of a set U...

, 1991 — Cited by 24 (6 self): Various formulations of constructive type theories have been proposed to serve as the basis for machine-assisted proof and as a theoretical basis for studying programming languages. Many of these calculi include a cumulative hierarchy of "universes," each a type of types closed under a collection of type-forming operations. Universes are of interest for a variety of reasons, some philosophical (predicative vs. impredicative type theories), some theoretical (limitations on the closure properties of type theories), and some practical (to achieve some of the advantages of a type of all types without sacrificing consistency). The Generalized Calculus of Constructions (CC^ω) is a formal theory of types that includes such a hierarchy of universes. Although essential to the formalization of constructive mathematics, universes are tedious to use in practice, for one is required to make specific choices of universe levels and to ensure that all choices are consistent. In this pa...

, 1992 — Cited by 24 (1 self): This thesis considers the problem of program correctness within a rich theory of dependent types, the Extended Calculus of Constructions (ECC). This system contains a powerful programming language of higher-order primitive recursion and higher-order intuitionistic logic. It is supported by Pollack's versatile LEGO implementation, which I use extensively to develop the mathematical constructions studied here. I systematically investigate Burstall's notion of deliverable, that is, a program paired with a proof of correctness. This approach separates the concerns of programming and logic, since I want a simple program extraction mechanism. The Σ-types of the calculus enable us to achieve this. There are many similarities with the subset interpretation of Martin-Löf type theory. I show that deliverables have a rich categorical structure, so that correctness proofs may be decomposed in a principled way. The categorical combinators which I define in the system package up much logical bo...

, 1995 — Cited by 23 (5 self): We present a categorical proof of the normalization theorem for simply typed λ-calculus, i.e. we derive a computable function nf which assigns to every typed λ-term a normal form, s.t. M ≃ N implies nf(M) = nf(N), and nf(M) ≃ M, where ≃ is βη-equality. Both the function nf and its correctness properties can be deduced from the categorical construction. To substantiate this, we present an ML program in the appendix which can be extracted from our argument. We emphasize that this presentation of normalization is reduction free, i.e. we do not mention term rewriting or use properties of term rewriting systems such as the Church-Rosser property. An immediate consequence of normalization is the decidability of ≃ but there are other useful corollaries; for instance we can show that...
Homework 6 (Spring 2008) MCS-388 Homework 6 (Spring 2008) Due: May 9, 2008 All questions in this assignment concern applying the lazy code motion partial-redundancy elimination algorithm described in Section 9.5 of the dragon book to the flow graph shown at the end of this assignment. All of the questions except the last one ask you to give a set of block numbers. Please list the block numbers in numerical order. If you want to list all the numbers 0 to 15 in order and then cross out the ones that don't belong in the set, that is fine too. And if you want to present all the sets in a combined table, that is fine too. 1. At which blocks is the expression a+b anticipated-in? 2. At which blocks is the expression a+b "available"-in, using Section 9.5's peculiar definition of "available"? 3. Which blocks are earliest for the expression a+b? 4. At which blocks is the expression a+b postponable-in? 5. Which blocks are latest for the expression a+b? 6. For each block that would be altered by Algorithm 9.36, give the block number and what the new contents of the block would be. To answer this question using the algorithm, you would need to first do one more dataflow analysis (the one named "used"). If you want, you can do that analysis. Or, you can just use your understanding of the algorithm's goal to figure out the end result. Course web site: http://www.gac.edu/~max/courses/S2008/MCS-388/ Instructor: Max Hailperin <max@gustavus.edu>
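Since the assignment's flow graph is not reproduced here, the shape of the first question can still be illustrated. Below is a minimal sketch of the backward "anticipated expressions" analysis for a single expression a+b on a made-up four-block diamond graph; the graph and its use/kill sets are hypothetical, and only the fixpoint scheme mirrors Section 9.5:

```python
# Hypothetical diamond CFG: 0 -> {1, 2} -> 3.  Block 3 is the exit.
succ = {0: [1, 2], 1: [3], 2: [3], 3: []}
use  = {0: False, 1: True, 2: True, 3: False}   # block evaluates a+b
kill = {0: False, 1: False, 2: False, 3: False} # block assigns a or b

blocks = list(succ)
# Anticipated is a backward "must" analysis, so initialize to True
# everywhere and let the iteration knock values down to a fixpoint.
ant_in  = {b: True for b in blocks}
ant_out = {b: True for b in blocks}

changed = True
while changed:
    changed = False
    for b in blocks:
        # OUT[B] = AND over successors of IN[S]; OUT[exit] = False.
        out = all(ant_in[s] for s in succ[b]) if succ[b] else False
        # IN[B] = use[B] OR (OUT[B] AND NOT kill[B]).
        new_in = use[b] or (out and not kill[b])
        if (out, new_in) != (ant_out[b], ant_in[b]):
            ant_out[b], ant_in[b] = out, new_in
            changed = True

print(ant_in)
```

The interesting result is that a+b is anticipated at the entry of block 0 even though block 0 never computes it: it is evaluated on every path leaving block 0, which is exactly what lazy code motion exploits when hoisting.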
Scaling up question August 22nd 2007, 01:16 PM Scaling up question Hi, wonder if anyone can help me with a question that has been bugging me! A recipe for a 7.5 inch square-based iced cake is scaled up to make a cake of the same shape but with a 10 inch square base. How could i work out roughly: 1. Volume of 10 inch over 7.5 inch 2. How much more flour is needed 3. How much more icing is needed Thanks for any help! August 22nd 2007, 01:25 PM Hi, wonder if anyone can help me with a question that has been bugging me! A recipe for a 7.5 inch square-based iced cake is scaled up to make a cake of the same shape but with a 10 inch square base. How could i work out roughly: 1. Volume of 10 inch over 7.5 inch 2. How much more flour is needed 3. How much more icing is needed Thanks for any help! Volume scales as the cube of a linear dimension, so the volume is (10/7.5)^3 times that of the 7.5" cake. Amount of flour needed scales as the volume, so (10/7.5)^3 times as much flour is required. The amount of icing scales as the surface area which scales as the square of a linear dimension so (10/7.5)^2 times as much icing is required.
{"url":"http://mathhelpforum.com/math-topics/17968-scaling-up-question-print.html","timestamp":"2014-04-21T06:54:45Z","content_type":null,"content_length":"5224","record_id":"<urn:uuid:9caddd89-2de4-4e71-9ea3-72bd9cc5ec37>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00418-ip-10-147-4-33.ec2.internal.warc.gz"}
Second order partial derivatives, the Taylor series and the degenerate case. October 22nd 2011, 07:14 AM #1 Oct 2011 Second order partial derivatives, the Taylor series and the degenerate case. I am wondering if someone could let me know if my understanding is right or wrong. The Taylor series gives the function in the form of a sum of an infinite series. From this an approximation of the change in the function can be derived: $f_{a}$ and $f_{a,a}$ are the first and second partial derivatives of a, respectively. $\Delta f(x,y) = f_{x}(x-x_{0}) + f_{y}(y-y_{0}) + \frac{1}{2}f_{x,x}(x-x_{0})^2 + \frac{1}{2}f_{y,y}(y-y_{0})^2 + f_{x,y}(y-y_{0})(x-x_{0}) + ...$ The first few terms in the Taylor series (those that appear above), can be used to make an approximation of the change in function. At a critical point: $f_{x} and f_{y} = 0$ And so we are left with: $\Delta f(x,y) = \frac{1}{2}f_{x,x}(x-x_{0})^2 + \frac{1}{2}f_{y,y}(y-y_{0})^2 + f_{x,y}(y-y_{0})(x-x_{0})$ Which is a quadratic approximation of the change in the function at a critical point. For me, its easier to understand what is happening by factoring out y: $(y-y_{0})^2 (\frac{1}{2}f_{x,x}\frac{(x-x_{0})^2}{(y-y_{0})^2} + \frac{1}{2}f_{y,y} + f_{x,y}\frac{(x-x_{0})}{(y-y_{0})})$ From this, the discriminant can be used to determine local min, local max, saddle etc properties. If the discriminant, $f_{x,y})^2 - 4\frac{1}{2}f_{x,x}\frac{1}{2}f_{y,y}$, equals zero. Then it is not possible to tell what is happening. This is because higher order terms are then important. Edit: realised attempt at an explanation is wrong. Thanks in advance. Last edited by hachataltoolimhakova; October 22nd 2011 at 01:54 PM. Follow Math Help Forum on Facebook and Google+
{"url":"http://mathhelpforum.com/calculus/191014-second-order-partial-derivatives-taylor-series-degenerate-case.html","timestamp":"2014-04-16T07:34:28Z","content_type":null,"content_length":"32624","record_id":"<urn:uuid:8eb2c6c3-3329-4639-aaa9-ee8b270bd14d>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00184-ip-10-147-4-33.ec2.internal.warc.gz"}
1.8: Problem Solving Strategies Created by: CK-12

The Four Thousand Footers

While at Galehead Hut, Kelly found a book on the different mountains in the Presidential Range. She was fascinated to learn that so many of them were above 4,000 feet in elevation. Laurel, one of the leaders, saw her reading it and came over to her. “Pretty interesting, huh?” she asked Kelly. “Yes. I had no idea.” “Well if you continue hiking maybe you’ll become part of the Four Thousand Footers Club,” Laurel said. “What is that?” Kelly asked. “That is a group that climbs all of the peaks above 4,000 feet. There are 48 of them.” Wow! Kelly couldn’t believe it. If each peak was at least 4,000 feet, that would be quite a collection. Kelly looked in the book again and found a whole chapter on the Four Thousand Footers Club. She was fascinated. She wrote down the following mountains in her journal.

Washington 6,288 ft.
Adams 5,774 ft.
Jefferson 5,712 ft.
Monroe 5,384 ft.
Madison 5,367 ft.
Lafayette 5,260 ft.
Lincoln 5,089 ft.

If she climbed each of these peaks, how many feet would that be in all? If each peak took two days on average to climb, how many days would it take her to climb them all? Use your problem solving skills to help Kelly figure this out. You will solve this problem by the end of the lesson.

What You Will Learn

In this lesson you will learn the following skills:

• Read and understand given problem situations.
• Develop and use a variety of strategies.
• Plan and compare alternative approaches to solving problems.
• Solve real-world problems using selected strategies as part of a plan.

Teaching Time

I. Read and Understand Given Problem Situations

Taking the time to read and understand a problem is the key to finding a solution. Unfortunately, it is the part that students often rush through and then end up making mistakes or becoming confused. You can avoid that problem yourself by using the questions that you learned in the last section. Remember those?
Also, keep in mind that sometimes there will be extra information in a problem deliberately put there to throw you off. Pay close attention as you read and don’t be fooled!! Let’s look at an example that has extra information in it.

Ron arranged his herb garden in rows, with the following number of plants in each row: 2 plants, 5 plants, 11 plants, 23 plants. The garden has an area of $25 \ yd^2$. How many plants will be in the fifth row?

Whenever you see a series of numbers, this should alert you to use the "find a pattern" strategy. This problem is asking us to identify a pattern rule and use that pattern rule to find how many plants will be in the fifth row. The pattern rule is $2x + 1$: each row has twice as many plants as the row before it, plus one. The question has nothing to do with the area of the garden, so we can ignore that information. Make a note in your notebook that when you see a series of numbers, you need to look for a pattern.

Melissa has 72 cookies she wants to put as evenly as possible into 7 gift bags. There are half as many chocolate chip cookies as peanut butter cookies. How many cookies will be left over after she puts them in the bags?

When we see the phrase “how many will be left over,” we can guess the problem might be asking about remainders and that to solve this problem we will need to divide. We don’t care about the numbers of different types of cookies, so we can ignore that information. $72 \div 7 = 10r2$. There will be two cookies left over.

II. Develop and Use a Variety of Strategies

There are a variety of strategies that you could select from when working on a problem, such as: find a pattern, guess and check, work backward, draw a picture, write an equation, and use a formula. The more you practice solving problems, the quicker you will become at identifying the most appropriate strategies to use when solving specific types of problems. Here are some hints to aid in selecting the best strategy: 1.
Find a Pattern - best used when there is a series of numbers and/or when you are being asked for a later quantity. For example, find the number in the tenth step.

2. Guess and Check - best used when you are looking for one or two numbers and you think one of them might work. You can take a guess, try out a number and then adjust your answer from there.

3. Work Backwards - think about the problems that you had earlier in this chapter when you were given the area or perimeter and you needed to find a side length. Working backwards is very helpful for problems like these where an answer of some sort is given right away.

4. Draw a Picture - look for examples that have some kind of visual in them; problems with geometric shapes work best with the draw-a-picture strategy.

5. Write an Equation - writing an equation is great when there is a missing quantity that needs to be figured out.

6. Use a Formula - formulas are helpful for area and perimeter problems. You will also encounter other formulas as you work through this book and those can be applied when problem solving as well.

Take a few notes before continuing in this lesson. Be sure that you understand all of the different strategies and when to use each one.

III. Plan and Compare Alternative Approaches to Solving Problems

There are many different ways to solve a problem and still get the correct answer. Strategies are designed to help make problem solving faster. As you work through lots of problems, you will find the approaches and strategies that work best for you. Look back at the notes you wrote in the last section; those will be helpful when selecting a strategy. Also, remember that sometimes more than one strategy can be used. For example, a problem with an unknown could be solved with an equation, but if it has a series of numbers, it could also be solved by looking for a pattern.

Let's practice. Ms. Powell wants to hang a large tapestry lengthwise on her living room wall.
The tapestry has a perimeter of 42 ft and a width of 9 ft. Ms. Powell's wall is 10 ft high. Will the length of the tapestry fit the height of Ms. Powell's wall without hitting the ceiling?

This is a multi-step problem requiring a variety of different strategies to solve. To start off, it might help to draw a picture to get a feel for what the problem is asking. The problem asks if the tapestry will fit. We are going to have to find the length of the tapestry and compare it to the height of Ms. Powell's wall. Because the question involves perimeter, we know we are going to have to use the formula for perimeter.

$$\begin{aligned} P &= 2l + 2w\\ 42 &= 2l + 2(9)\\ 42 &= 2l + 18\\ 24 &= 2l\\ l &= 12 \ feet \end{aligned}$$

Now we need to return to the problem and compare the length of the tapestry with the height of the wall. The tapestry is 12 feet long; the ceiling is only 10 feet high. The tapestry won't fit!

An alternative approach to solving this problem would have been to work only from a drawing. We could have drawn each piece of the problem and then compared. Looking at the dimensions, you would have been able to see that the tapestry would not fit.

Let's look at another example:

The band leader has arranged 56 musicians that will be participating in a parade. The number of players he placed in each row was 10 more than the number of rows. How many rows were there?

To approach this problem, we need to assume that the musicians are arranged in equal rows. Then we know we are going to have to look for factors of 56 which fit these qualifications. We can use "guess and check" to test factor pairs:

$7 \times 8 = 56 \qquad 28 \times 2 = 56 \qquad 14 \times 4 = 56$

Only $4 \times 14 = 56$ fits the conditions, because 14 players per row is 10 more than 4 rows. There were 4 rows.

Guessing and checking isn't the only way to solve this problem. You might also choose to draw a picture or make an organized list.

Real Life Example Completed

The Four Thousand Footers

Here is the original problem once again. Reread it and underline any important information.
While at Galehead Hut, Kelly found a book on the different mountains in the Presidential Range. She was fascinated to learn that so many of them were above 4,000 feet in elevation. Laurel, one of the leaders, saw her reading it and came over to her.

"Pretty interesting, huh?" she asked Kelly.

"Yes. I had no idea."

"Well if you continue hiking maybe you'll become part of the Four Thousand Footers Club," Laurel said.

"What is that?" Kelly asked.

"That is a group that climbs all of the peaks above 4,000 feet. There are 48 of them."

Wow! Kelly couldn't believe it. If each peak was at least 4,000 feet, that would be quite a collection. Kelly looked in the book again and found a whole chapter on the Four Thousand Footers Club. She was fascinated. She wrote down the following mountains in her journal.

• Washington 6,288 ft.
• Adams 5,774 ft.
• Jefferson 5,712 ft.
• Monroe 5,384 ft.
• Madison 5,367 ft.
• Lafayette 5,260 ft.
• Lincoln 5,089 ft.

If she climbed each of these peaks, how many feet would that be in all? If each peak took an average of two days to climb, how many days would it take her to climb them all?

First, let's find the sum of all of the elevations in Kelly's list.

$6288 + 5774 + 5712 + 5384 + 5367 + 5260 + 5089 = 38874 \ feet$

Keep in mind that this is only one way. Kelly still has to hike back down, so you will need to double this number to accurately represent the total feet hiked for these mountains: $38874 \times 2 = 77748 \ feet$.

How many miles is that? There are 5,280 feet in a mile, so $77748 \div 5280 \approx 14.7$ miles.

Now if it were to take 2 days for each peak, how many days would it take Kelly to climb these mountains on her list? We can write an equation: 7 mountains at 2 days each gives $7 \times 2 = 14 \ days$.

At Galehead Hut, Kelly looked out at the views of the mountains and was glad to be a part of the summer teen adventure program. While she was tired, she was also satisfied and proud of what she had accomplished so far. She was excited to think about what the next adventure would be!
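The arithmetic in the worked example above can be checked with a short script. Python is used here purely for illustration, and the 5,280-feet-per-mile conversion is standard:

```python
# Sum Kelly's peak elevations, double for the hike back down,
# convert to miles (5,280 ft per mile), and count days at 2 per peak.
peaks = {
    "Washington": 6288, "Adams": 5774, "Jefferson": 5712, "Monroe": 5384,
    "Madison": 5367, "Lafayette": 5260, "Lincoln": 5089,
}
total_up = sum(peaks.values())   # 38,874 ft one way
round_trip = 2 * total_up        # 77,748 ft up and down
miles = round_trip / 5280        # about 14.7 miles
days = 2 * len(peaks)            # 14 days at two days per peak
print(total_up, round_trip, miles, days)
```

Checking your answers this way is itself a problem-solving habit worth teaching: the script mirrors the plan (sum, double, convert, count), one line per step.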
Time to Practice

Directions: Use what you have learned to solve the following problems.

1. Mary went to the music store with her babysitting money. She bought two CDs for $12.50 each and two magazines for $4.25 each. She left the store with $10.25. How much money did she start with?

2. Since he began his fitness routine, Mr. Trigg has measured his weight every week. His weights for the first six weeks are as follows: 236, 230, 232, 226, 228, 222. If the pattern continues, how much will he weigh in the tenth week?

3. The area of City Park is $75 \ km^2$

4. A farmer planted corn, wheat, and cotton in a total of 88 fields. He planted twice as many fields in corn as in wheat and half as many in cotton as in corn. How many fields did he plant of each crop?

5. Mrs. Whitaker is mailing a pair of shoes to her daughter. She wants to fit the rectangular shoebox inside a larger square box. The area of the base of the shoebox is $84 \ in.^2$

6. After a pinball game, the scoreboard showed that the combined points of Peter, Ella, and Ned were 728. Ella scored half the points of Ned, and Peter scored one-fourth the points of Ned. How many points did each player score?

7. Tami made a total of $47 babysitting on New Year's Eve. She made her hourly rate plus a $7 tip. If she worked 5 hours, what is her hourly rate?

8. A weightlifter lifts weights in the following order: 0.5 lb, 1.5 lb, 4.5 lb, 13.5 lb. How many pounds will he lift next?

9. Figure A is a square with a side that measures 9 cm. Figure B is a square with a side that measures 6 cm. Which figure has the greater area, Figure A or Figure B?

10. Mr. and Mrs. Rowe are driving 959 miles to have a vacation on the beach. They want to split the driving distance over 4 days, driving the exact same amount on the first three days and the remainder on the fourth day. If they drive 119 miles on the fourth day, how many miles will they drive on the first day?

11. Cedric spent $27.75 on pizza for his friends.
Each cheese pizza cost $8 and each extra topping cost $0.75. If Cedric bought 3 cheese pizzas, how many extra toppings did he get?

12. At the trading fair, Chi Wong arranged 72 baseball cards in rows on the trading table. Each row had 14 more cards than the number of rows. How many cards were in each row?

Take a few minutes to check the strategies you used with a peer. Did you use the same strategies or different strategies in problem solving?
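Two of the practice problems respond nicely to the strategies from this lesson. A quick sketch, with the numbers taken from problems 11 and 12 above (illustrative only; answers follow directly from the stated conditions):

```python
# Problem 11 ("write an equation"): 27.75 = 3 * 8 + 0.75 * t; solve for t.
toppings = (27.75 - 3 * 8) / 0.75
print(toppings)  # 5.0 extra toppings

# Problem 12 ("guess and check"): r rows of r + 14 cards each, 72 cards total.
rows = next(r for r in range(1, 73) if r * (r + 14) == 72)
print(rows, rows + 14)  # 4 rows of 18 cards
```

The second loop is literally the guess-and-check strategy: try each candidate number of rows and keep the one that satisfies both conditions.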
determinants and polynomials in matrices

Muirhead (1982, "Aspects of Multivariate Statistical Theory") references on page 59 a result from MacDuffee (1943, chap. 3, "Vectors and Matrices", a book I cannot find):

"The only polynomials in the elements of a matrix satisfying $p(I)=1$ and $p(AB)=p(A)p(B)$ for all matrices are the integer powers of the determinant: $p(B) = (\det B)^k$ for some integer $k$."

Where else can I find this result and discussions of it?

reference-request linear-algebra

I would check Emil Artin's "Geometric Algebra". – J. Martel Jun 7 '12 at 0:56

For discussions of it, I highly recommend the following three threads on the AoPS forum: artofproblemsolving.com/Forum/viewtopic.php?f=349&t=43292, artofproblemsolving.com/Forum/viewtopic.php?f=349&t=218429, artofproblemsolving.com/Forum/viewtopic.php?f=349&t=279631 – Daniel m3 Jun 7 '12 at 22:07

MacDuffee's book should be available here: libgen.info/view.php?id=4344 – Pasha Zusmanovich May 10 '13 at 14:51

4 Answers

S. Cater proved that every $\mathbb F$-valued map $f$ on square matrices which satisfies $f(ABC)=f(CBA)$ can be written as $f(X)=\pi(\det(X))$ for a unique map $\pi:\mathbb F\to \mathbb F$. Also, $f$ is multiplicative iff $\pi$ is multiplicative.

When you assume $\mathbb F=\mathbb R$ and $f$ is continuous, for example, then a continuous multiplicative $\pi$ is of the form $x^{r}$ for some $r$, and the result you quote is a corollary (because polynomials are continuous). This general result is proved in "Scalar valued mappings of squared matrices".

The author in the linked paper is "S. Cater" – Matt Young Jun 7 '12 at 19:46

Thanks! Typo fixed. – Gjergji Zaimi Jun 7 '12 at 21:50

Using some group theory, the result can be easily generalized as follows: If $R$ is an infinite commutative ring such that $SL_n(R)$ is perfect (i.e.
$SL_n(R)$ is its own commutator subgroup), then the only polynomial functions $p: R[x_{11},\dots,x_{nn}] \to R$ satisfying the required identities are $p(X) = \det(X)^k$ for some integer $k\ge 0$.

Examples for $R$ are all (infinite) local rings (in particular, fields) and principal ideal domains.

Proof: $p$ induces a group homomorphism $p: GL_n(R) \to R^\times$ whose kernel contains the commutator subgroup. Since $SL_n(R)$ is perfect, $SL_n(R) \le \ker(p)$. Define a group homomorphism $$f: R^\times \to R^\times,\; x \mapsto p\big(\operatorname{diag}(x,1,\dots,1)\big).$$ If $A \in GL_n(R)$, set $B := \operatorname{diag}(\det(A),1,\dots,1)$. Then $AB^{-1} \in SL_n(R)$ implies $$p(A)= p(B)=f(\det(A)).\hspace{70pt}(\ast)$$ Since $p$ is a polynomial function, $f(x)$ is a polynomial function in $x$ satisfying $$f(xy)=f(x)f(y).\hspace{110pt}(\ast\ast)$$ As $R$ is infinite, it's easy to see that the only polynomial functions with $(\ast\ast)$ are $f(x)=x^k$. Now the result follows from $(\ast)$. q.e.d.

Francois Ziegler's answer is not massive overkill. The proof is simple. Suppose you have a continuous multiplicative mapping $P: \operatorname{Mat}_n(\mathbb R) \to (\mathbb R, \cdot)$ as you started with; then it restricts to a continuous group homomorphism $P:GL(n)\to (\mathbb R\setminus\{0\}, \cdot)$, which is analytic (using $\exp$). Its derivative at $\mathbb I_n$ is a Lie algebra homomorphism $P':\mathfrak{gl}(n)\to \mathbb R$ which must vanish on each commutator. The space of all commutators is the codimension-1 Lie subalgebra $\mathfrak{sl}(n)$. Since $P'$ is also linear, it is of the form $P'(X) = k\cdot\operatorname{Trace}(X)$ for some $k$. This integrates to $P(A) = \det(A)^k$. Here $k$ must be integral if the ground field is $\mathbb C$. In the real case any $k$ works if $\det(A)$ is always $\ge 0$, and integral generally.
This is going to sound like massive overkill, but it is "very well known" that the only 1-dimensional polynomial representations of $GL(V)$ (which is what you're looking at) are the nonnegative powers of $\mathrm{det}$.

Reference (I assume from the mention of statistics that you are OK working with base field $\mathbf{R}$ or $\mathbf{C}$): e.g. Procesi, on p. 278 of Lie Groups, lists all irreducible rational representations as $$ S_\lambda(V)\otimes\mathrm{det}^k,\qquad k\in\mathbf{Z}, $$ where $\lambda$ runs over a certain set of partitions or Young tableaux; and on p. 270 he gives a dimension formula for $S_\lambda(V)$ which is $>1$ unless $S_\lambda(V)$ is trivial.
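As a quick numeric spot check of the quoted result, one can verify that $p(X) = \det(X)^k$ really does satisfy $p(I)=1$ and $p(AB)=p(A)p(B)$. This checks only that direction, on 2×2 integer matrices; it is an illustration, not a substitute for the proofs above:

```python
import random

def det2(m):
    """Determinant of a 2x2 matrix [[a, b], [c, d]]: a*d - b*c."""
    (a, b), (c, d) = m
    return a * d - b * c

def matmul2(m, n):
    """Product of two 2x2 matrices."""
    return [[sum(m[i][t] * n[t][j] for t in range(2)) for j in range(2)]
            for i in range(2)]

identity = [[1, 0], [0, 1]]
rng = random.Random(0)
for k in (0, 1, 2, 3):                 # p(X) = det(X)**k
    assert det2(identity) ** k == 1    # p(I) = 1
    for _ in range(100):
        A = [[rng.randint(-5, 5) for _ in range(2)] for _ in range(2)]
        B = [[rng.randint(-5, 5) for _ in range(2)] for _ in range(2)]
        # p(AB) = p(A) * p(B), exactly, since the entries are integers
        assert det2(matmul2(A, B)) ** k == det2(A) ** k * det2(B) ** k
print("p = det**k passes the spot check")
```

The interesting content of the theorem is of course the converse, that nothing other than powers of the determinant works, which is what the answers above prove.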
compute the square of a number

Hello, please help me solve this question in Java programming: write a method (and the calling statement) to compute the square of a number.

To compute the square of a number you only need the number squared, i.e. multiplied by itself. Are you sure that is your problem? Perhaps you meant "square root"? There is a ready-made method, Math.sqrt(x)... If you want to calculate it without library methods you can use either the "Babylonian method" (you can find a comprehensive description here: Wiki: Square Root Approximation - CodeAbbey) or you can even use binary search.

Please show some effort, then describe what you need help with, what you don't understand, errors received, etc. THEN we can help.
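The Babylonian method mentioned in the reply is a tiny loop: repeatedly average a guess x with s/x until x*x is close enough to s. Here is a generic sketch (shown in Python for brevity; the same loop ports line-for-line into a Java method, and the helper name is made up):

```python
def babylonian_sqrt(s, tolerance=1e-12):
    """Approximate sqrt(s) by the Babylonian (Heron's) method:
    repeatedly replace the guess x with the average of x and s / x."""
    if s < 0:
        raise ValueError("square root of a negative number")
    if s == 0:
        return 0.0
    x = s if s >= 1 else 1.0          # any positive starting guess works
    while abs(x * x - s) > tolerance * s:
        x = (x + s / x) / 2.0
    return x

print(babylonian_sqrt(2))    # close to 1.41421356...
print(babylonian_sqrt(81))   # close to 9.0
```

This is Newton's method applied to x² − s = 0, so it converges very quickly; a handful of iterations suffices for double precision.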
the first resource for mathematics

Slant submanifolds of Kähler product manifolds. (English) Zbl 1126.53018

In the paper, slant submanifolds of a Kähler product manifold are studied. A submanifold $M$ of a Kähler manifold is said to be slant if for each nonzero vector $X\in {T}_{p}M$ the angle $\theta \left(X\right)$ between $JX$ and ${T}_{p}M$ is independent of the choice of $p\in M$ and $X\in {T}_{p}M$. Consider a Kähler product manifold $\overline{M}={\overline{M}}_{1}\times {\overline{M}}_{2}$. Denote by $P$ and $Q$ the projection operators of the tangent space of $\overline{M}$ onto the tangent spaces of ${\overline{M}}_{1}$ and ${\overline{M}}_{2}$, respectively, and put $F=P-Q$. It is proved that an $F$-invariant slant submanifold of the Kähler product manifold is a product manifold ${M}_{1}\times {M}_{2}$, where ${M}_{i}$ ($i=1,2$) is also a slant submanifold of ${\overline{M}}_{i}$. It is also obtained that if ${M}_{1}\times {M}_{2}$ is a Kähler slant submanifold of ${\overline{M}}_{1}\times {\overline{M}}_{2}$, then ${M}_{i}$ is a Kähler slant submanifold of ${\overline{M}}_{i}$. In the last section several inequalities on the scalar curvature and the Ricci tensor for slant, invariant and anti-invariant submanifolds of a Kähler product manifold are obtained.

53C15 Differential geometric structures on manifolds
53C40 Global submanifolds (differential geometry)
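For orientation, the standard trichotomy behind the slant terminology (following B.-Y. Chen's definition; this is general background for readers outside this literature, not a statement from the reviewed paper):

```latex
% J denotes the almost complex structure of the ambient Kähler manifold.
% For a nonzero X tangent to M at p, \theta(X) is the Wirtinger angle
% between JX and the tangent space T_pM.  M is slant when \theta is constant:
\theta(X) \equiv \theta \in \left[0, \tfrac{\pi}{2}\right],
\qquad
\begin{cases}
\theta = 0, & M \text{ is invariant (holomorphic)},\\[2pt]
\theta = \tfrac{\pi}{2}, & M \text{ is anti-invariant (totally real)},\\[2pt]
0 < \theta < \tfrac{\pi}{2}, & M \text{ is proper slant}.
\end{cases}
```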
Almost surely

From Wikipedia, the free encyclopedia

In probability theory, one says that an event happens almost surely (sometimes abbreviated as a.s.) if it happens with probability one.^1 The concept is analogous to the concept of "almost everywhere" in measure theory. While in many basic probability experiments there is no difference between almost surely and surely (that is, entirely certain to happen), the distinction is important in more complex cases relating to some sort of infinity. For instance, the term is often encountered in questions that involve infinite time, regularity properties or infinite-dimensional spaces such as function spaces. Basic examples of use include the law of large numbers (strong form) and the continuity of Brownian paths. The terms almost certainly (a.c.) and almost always (a.a.) are also used. Almost never describes the opposite of almost surely; an event which happens with probability zero happens almost never.^2

Formal definition

Let $(\Omega,\mathcal{F},P)$ be a probability space. An event $E \in \mathcal{F}$ happens almost surely if $P[E]=1$. Equivalently, $E$ happens almost surely if the probability of $E$ not occurring is zero: $P[E^C] = 0$. More generally, any event $E$ (not necessarily in $\mathcal{F}$) happens almost surely if $E^C$ is contained in a null set: a subset of some $N\in\mathcal F$ such that $P[N]=0$.^3 The notion of almost sureness depends on the probability measure $P$. If it is necessary to emphasize this dependence, it is customary to say that the event $E$ occurs $P$-almost surely, or almost surely $[P]$.

"Almost sure" versus "sure"

The difference between an event being almost sure and sure is the same as the subtle difference between something happening with probability 1 and happening always. If an event is sure, then it will always happen, and no outcome not in this event can possibly occur.
If an event is almost sure, then outcomes not in this event are theoretically possible; however, the probability of such an outcome occurring is smaller than any fixed positive probability, and therefore must be 0. Thus, one cannot definitively say that these outcomes will never occur, but can for most purposes assume this to be true.

Throwing a dart

For example, imagine throwing a dart at a unit square wherein the dart will impact exactly one point, and imagine that this square is the only thing in the universe besides the dart and the thrower. There is physically nowhere else for the dart to land. Then, the event that "the dart hits the square" is a sure event. No other alternative is imaginable.

Next, consider the event that "the dart hits the diagonal of the unit square exactly". The probability that the dart lands on any subregion of the square is proportional to the area of that subregion. But, since the area of the diagonal of the square is zero, the probability that the dart lands exactly on the diagonal is zero. So, the dart will almost never land on the diagonal (i.e. it will almost surely not land on the diagonal). Nonetheless, the set of points on the diagonal is not empty and a point on the diagonal is no less possible than any other point; therefore, theoretically it is possible that the dart actually hits the diagonal.

The same may be said of any point on the square. Any such point P has zero area and so will have zero probability of being hit by the dart. However, the dart clearly must hit the square somewhere. Therefore, in this case, it is not only possible or imaginable that an event with zero probability will occur; one must occur. Thus, we would not want to say we were certain that a given event would not occur, but rather almost certain.

Tossing a coin

Consider the case where a coin is tossed. A coin has two sides, heads and tails, and therefore the event that "heads or tails is flipped" is a sure event.
There can be no other result from such a toss. Now consider the single "coin toss" probability space $(\{H,T\}, 2^{\{H, T\}}, \mathbb{P})$, where the event $\{\omega = H\}$ occurs if heads is flipped, and $\{\omega=T\}$ if tails. For this particular coin, assume the probability of flipping heads is $\mathbb{P}[\omega = H] = p\in (0, 1)$, from which it follows that the complement event, flipping tails, has $\mathbb{P}[\omega = T] = 1 - p$.

Suppose we were to conduct an experiment where the coin is tossed repeatedly, and it is assumed each flip's outcome is independent of all the others; that is, they are i.i.d. Define the sequence of random variables on the coin toss space, $\{X_i(\omega)\}_{i\in\mathbb{N}}$, where $X_i(\omega)=\omega_i$, i.e. each $X_i$ records the outcome of the $i$'th flip.

The event that every flip results in heads, yielding the sequence $\{H, H, H, \dots\}$ ad infinitum, is possible in some sense (it does not violate any physical or mathematical laws to suppose that tails never appears), but it is very, very improbable. In fact, the probability of tails never being flipped in an infinite series is zero. To see why, note that the i.i.d. assumption implies that the probability of flipping all heads over $n$ flips is simply $\mathbb{P}[X_i = H, \ i=1,2,\dots,n]=\left(\mathbb{P}[X_1 = H]\right)^n = p^n$. Letting $n\rightarrow\infty$ yields zero, since $p\in (0,1)$ by assumption. Note that the result is the same no matter how much we bias the coin towards heads, so long as we constrain $p$ to be greater than 0 and less than 1. Thus, though we cannot definitely say tails will be flipped at least once, we can say there will almost surely be at least one tails in an infinite sequence of flips. (Note that given the statements made in this paragraph, any predefined infinitely long ordering, such as the digits of pi in base two with heads representing 1 and tails representing 0, would have zero probability in an infinite series.
This makes sense because there are an infinite number of total possibilities and $\lim\limits_{n\to\infty}\frac{1}{n} = 0$.)

However, if instead of an infinite number of flips we stop flipping after some finite time, say a million flips, then the all-heads sequence has non-zero probability. The all-heads sequence has probability $p^{1{,}000{,}000} \neq 0$, while the probability of getting at least one tails is $1 - p^{1{,}000{,}000}$, and the event is no longer almost sure.

Asymptotically almost surely

In asymptotic analysis, one says that a property holds asymptotically almost surely (a.a.s.) if, over a sequence of sets, the probability converges to 1. For instance, a large number is asymptotically almost surely composite, by the prime number theorem; and in random graph theory, the statement "$G(n,p_n)$ is connected" (where $G(n,p)$ denotes the graphs on $n$ vertices with edge probability $p$) is true a.a.s. when $p_n > \tfrac{(1+\epsilon) \ln n}{n}$ for any $\epsilon > 0$.^4 In number theory this is referred to as "almost all", as in "almost all numbers are composite". Similarly, in graph theory, this is sometimes referred to as "almost surely".^5

References

• Rogers, L. C. G.; Williams, David (2000). Diffusions, Markov Processes, and Martingales 1. Cambridge University Press.
• Williams, David (1991). Probability with Martingales. Cambridge University Press.
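Both thought experiments in the article above, the dart and the biased coin, can be made quantitative with a short sketch. This is illustrative only, and the helper names are made up:

```python
import random

# 1) The dart: P(landing within eps of the diagonal) is exactly
#    2*eps - eps**2 (the area of a band around the diagonal), which
#    tends to 0 as eps does: hitting the diagonal exactly has
#    probability 0 even though it is possible.
def prob_near_diagonal(eps, trials=200_000, seed=0):
    rng = random.Random(seed)
    hits = sum(1 for _ in range(trials)
               if abs(rng.random() - rng.random()) < eps)
    return hits / trials

for eps in (0.1, 0.01, 0.001):
    print(eps, prob_near_diagonal(eps), 2 * eps - eps ** 2)

# 2) The coin: P(all heads in n flips) = p**n, which vanishes as n grows,
#    so an infinite sequence almost surely contains at least one tails.
def all_heads_prob(p, n):
    return p ** n

for n in (10, 100, 1_000, 10_000):
    print(n, all_heads_prob(0.99, n))
```

The Monte Carlo estimate agrees with the exact band area, and even a coin biased 99% toward heads has an all-heads probability that collapses toward zero within a few thousand flips.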
Search Results: Mathematics

2013 College of Science Student Awards (Posted: Thursday, April 4, 2013)
The College of Science has named Katie Slavens as the recipient of the John P. George Award for 2013. The George Award is given to the outstanding graduating senior from the college, chosen from among students nominated from each of the college's departments. Katie will receive a certificate and a 50 oz. engraved silver bar, to be presented at the college's graduation reception in the ...

Dissertation Defense: Jodi Frost (Friday, July 6, 2012 at 2:30 PM)
Doctoral candidate in Mathematics, Jodi Frost, will defend her dissertation, entitled "Pre-Service Teachers' Conceptions of Literal Symbols," on July 6 in TLC 141 at 2:30 p.m. The event is free and open to the public.

Mathematics Colloquium: Fernando Guevara Vasquez (University of Utah), Thursday, May 3, 2012 at 3:30 PM
Mathematics Colloquium, featuring Fernando Guevara Vasquez (University of Utah), titled "Active exterior cloaking for the Helmholtz equation," will occur May 3 at 3:30 p.m. in TLC 032. ABSTRACT: We present a way of using active sources to hide objects from a known incident field. The active sources cancel out the incident field in a region while having a small far field. Since very little waves ...

Mathematics Colloquium: Yulia Hristova (University of Minnesota), Thursday, April 26, 2012 at 3:30 PM
Mathematics Colloquium, presented by Yulia Hristova (University of Minnesota), titled "New problems in emission tomography," will occur April 26 at 3:30 p.m. in TLC 032. ABSTRACT: Computerized tomography (CT) is the name of a class of non-invasive imaging techniques in which the interior structure of an object is computed from external measurements. In order to recover an image of the interior on...
Mathematics Colloquium: Jodi Mead (Boise State University), Thursday, April 19, 2012 at 3:30 PM
Mathematics Colloquium, with Jodi Mead (Boise State University), titled "Inverse Problems and Uncertainty Quantification," will occur April 19 at 3:30 p.m. in TLC 032. ABSTRACT: Combining physical or mathematical models with observational data often results in an ill-posed inverse problem. Regularization is typically used to solve ill-posed problems, and it can be viewed as adding statistical ...

Mathematics Colloquium: Bahman Shafii (University of Idaho), Thursday, April 12, 2012 at 3:30 PM
Mathematics Colloquium, with Bahman Shafii (University of Idaho), titled "Using Maximum Entropy and Bayesian Analysis to Develop Non-parametric Probability Distributions for the Mean and Variance," will occur April 12 at 3:30 p.m. in TLC 032. ABSTRACT: Estimation of moments such as the mean and variance of populations is generally carried out through sample estimates. Given normality of the ...
» Read More Computer Science ColloquiumTuesday, March 6 2012 at 3:30 PM Computer Science Colloquium: "How Can Problem Structure Analysis from Evolutionary Computation be Applied to Biological Problems?" will occur March 6 at 3:30 p.m. in EP 122. In Biology, understanding the interaction of genes with other genes and their environment is critical to understanding how organisms function and evolve. In Computer Science, understanding the "structure" of a problem » Read More Mathematics Colloquium: Hong WangThursday, March 1 2012 at 3:30 PM Mathematics Colloquium: Hong Wang (University of Idaho) titled "Turán Number, Ramsey Number and The Regularity Lemma" will occur March 1 at 3:30 p.m. in TLC 032. ABSTRACT: One of the powerful tools in combinatorics is the regularity lemma of Szemerédi. This talk is to introduce this powerful regularity lemma and its related blow-up methods. We will begin with some basic definitions and then » Read More
integration using trigonmetric functions October 23rd 2008, 09:58 PM integration using trigonmetric functions integrate question: $<br /> <br /> = \int (3sin(4x+5) - 6cos(7-8x)) dx<br /> <br />$ any help, im thinking it has to do with identities? October 23rd 2008, 10:08 PM It looks kind of evil, but it's actually not that difficult. You can split it up into two different integrals. $3\int (sin(4x+5))dx - 6\int (cos(7-8x))dx$ Now use u-subs, taking the 4x+5 and the 7-8x to be the u's for the two different integrals. I think it yields: $(-3/4)cos(4x+5) +(3/4)sin(7-8x) + C$ October 23rd 2008, 10:48 PM It looks kind of evil, but it's actually not that difficult. You can split it up into two different integrals. $3\int (sin(4x+5))dx - 6\int (cos(7-8x))dx$ Now use u-subs, taking the 4x+5 and the 7-8x to be the u's for the two different integrals. I think it yields: $(-3/4)cos(4x+5) +(3/4)sin(7-8x) + C$ shouldnt it be $(-3/4)cos(4x+5)-(3/4)sin(7-8x) + C$ ? if theres a minus there already and u have a plus from the sin coming in for the cos? October 26th 2008, 03:12 PM But don't forget, the coefficient for x in the sin function is -8, making du = -8 and thus causing the two minuses (from the original equation and then from the integration) to cancel out, yielding a positive.
{"url":"http://mathhelpforum.com/calculus/55450-integration-using-trigonmetric-functions-print.html","timestamp":"2014-04-16T05:25:42Z","content_type":null,"content_length":"6586","record_id":"<urn:uuid:8260f842-29a3-4614-90dd-7d93683b88a9>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00056-ip-10-147-4-33.ec2.internal.warc.gz"}
Bonita, CA Algebra Tutor Find a Bonita, CA Algebra Tutor ...I have tutored all math subjects from basic arithmetic past advanced calculus and differential equations since 2009. I love helping students find their own learning style, and giving them the tools to learn the subject on their own without me! I also love tutoring computer science and physics. 37 Subjects: including algebra 2, algebra 1, geometry, calculus ...I also spent one year tutoring in Seattle, WA, working with special needs students pursuing their GEDs. I am an effective tutor because of my skill in assessing my student's needs, but also because of my ability to empathize with young learners. I believe there is always an unseen angle that each learner can use to make the subject material more accessible and interesting. 14 Subjects: including algebra 2, algebra 1, French, geometry ...I can make math be your best friend!I am a native Spanish speaker with excellent proficiency in the language. I also wrote many official documents in Spanish for my employer for many years. I have the patience and knowledge of the language to work with students of all levels on their pronunciat... 21 Subjects: including algebra 1, economics, SAT math, statistics ...I encourage feedback, because I really care on the student's opinion. I do not charge for a lesson if the student is not satisfied with the tutoring. I am very flexible with time, and I am happy to travel to a location that is convenient for the student. 8 Subjects: including algebra 1, Spanish, chemistry, physiology ...As an undergraduate student, I also tutored entry-level and high school chemistry and math for 3 years. Even when I was middle and high school age, my siblings, cousins and I would play "school" at our house in the summer. I was always the teacher and routinely helped and tutored the younger ones in their elementary school subjects including math drills, reading and writing. 
9 Subjects: including algebra 1, English, chemistry, reading
{"url":"http://www.purplemath.com/bonita_ca_algebra_tutors.php","timestamp":"2014-04-17T07:51:02Z","content_type":null,"content_length":"24033","record_id":"<urn:uuid:d6507e3a-0ca6-408b-9b5b-acb37c6b2bc1>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00053-ip-10-147-4-33.ec2.internal.warc.gz"}
Friday math movie – Erik Demaine: Computational Origami [26 Jun 2009]

This week's movie is a talk by MIT professor Erik Demaine. Here's what SeedMagazine says: As a glassblower, Tetris master, magician, and mathematician, the MIT professor has spent his life exploring the mysterious and fascinating relationships between art and geometry. Here, he discusses the potential of lasers, leopard spots, and computer science to breathe new life into everything from architecture to origami.

Demaine has a "hard time distinguishing art from mathematics". His approach to art has a strong emphasis on collaboration, which, as he says, is a rare thing in art. Demaine is a professor in computer science and mathematics. He realized that "mathematics (itself) is an art form". During the talk, he mentions Escher's study of mathematics.

[Unfortunately, this video is no longer available.]

[Thanks to Maria at Natural Math for the movie link.]
{"url":"http://www.intmath.com/blog/friday-math-movie-erik-demaine-computational-origami/2588","timestamp":"2014-04-17T18:24:13Z","content_type":null,"content_length":"24610","record_id":"<urn:uuid:48a6059e-2787-467f-bc7c-b139995699d9>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00037-ip-10-147-4-33.ec2.internal.warc.gz"}
Factoring Polynomials of Degree 2

Date: 4/8/96 at 13:39:19
From: Lorrane Chung
Subject: Factoring polynomials

I am having a lot of difficulties factoring polynomials of the type x^2 + 6x + 8 and 3x^2 + 10x + 8. I have exhausted all the methods - they don't seem to work for me. I really need an easy method to factor other than the quadratic formula or finding 2 factors that multiply to give the last term but add to give the middle term. Please help.

Date: 4/16/96 at 19:33:19
From: Doctor Patrick
Subject: Re: Factoring polynomials

Hi Lorrane! I hope we can help you. How did you try factoring these problems? When I have to factor, I figure out the possible factors for the first and last terms and then play with them a little to see what might work, and what is definitely going to be out of the question.

In the first problem, all we need to factor is the 8 - the x^2 is going to have to be x*x. This gives us two choices: 8*1, or 2*4. Since all of the numbers in the problem are positive, we can ignore the other two possibilities of -8*-1 and -2*-4. Working with the 8*1 first, we get (x+8)*(x+1), which doesn't give us the middle term we are looking for. Since there are no other combinations using 8*1 (do you see why?) we can move on to the other possibility of 2*4. If we try factoring the equation with these numbers we get (x+2)(x+4). When we multiply this back out we get x*x + 2x + 4x + 8, which equals x^2 + 6x + 8 after we combine like terms and multiply out the x*x.

The second problem is more complicated since we have one additional term (the 3 in front of the x^2). Again, we have two ways to factor 8, but we also need to try multiplying the 3 by both of the factors to find out which combination works. If we start with 8*1 as factors of 8, again we get two possibilities: (3x+8)(x+1) or (3x+1)(x+8). Do you understand why there are two possibilities this time? The first possibility would end up giving us (3x*1) + (8*x), or 3x + 8x, for the middle term.
Since this would equal 11x it is not the combination we are looking for. Likewise (3x+1)(x+8) will not work. (Why not?) So now we have to move on to the other set of factors (4*2). Why don't you try the remaining two possibilities on your own using the same methods we used above? Good luck! Write us back if you need more help!

-Doctor Patrick, The Math Forum
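Doctor Patrick's trial-and-error search can also be automated. The sketch below (our addition; the helper name is made up) tries integer factor pairs (px+q)(rx+s) of ax^2 + bx + c, assuming a > 0 and c != 0:

```python
def factor_quadratic(a, b, c):
    """Search integer factorizations (px+q)(rx+s) of ax^2 + bx + c by trial.

    Mirrors the hand method: enumerate factor pairs of a and of c, then
    check whether the cross terms p*s + q*r add up to the middle term b.
    Returns ((p, q), (r, s)) or None if no integer factorization exists.
    """
    for p in range(1, a + 1):
        if a % p:
            continue
        r = a // p
        for q in range(-abs(c), abs(c) + 1):
            if q == 0 or c % q:
                continue
            s = c // q
            if p * s + q * r == b:
                return (p, q), (r, s)
    return None
```

For x^2 + 6x + 8 this finds (x+2)(x+4), and for 3x^2 + 10x + 8 it finds (x+2)(3x+4), the same answers the letter walks toward by hand.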
{"url":"http://mathforum.org/library/drmath/view/58545.html","timestamp":"2014-04-16T23:01:50Z","content_type":null,"content_length":"7265","record_id":"<urn:uuid:983bbb52-0340-4eb6-9f39-1946b6e90314>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00212-ip-10-147-4-33.ec2.internal.warc.gz"}
Introduction to PyEC

PyEC provides an implementation of several well-known optimization methods with a particular focus on methods of evolutionary computation or natural computation. If you are just interested in optimizing functions, see the section on Basic Usage below. For the time being, this code is in an experimental stage and will likely change substantially over the next few minor versions.

Who is PyEC for?

PyEC is intended for three main groups of people:

Anyone who needs to optimize functions. PyEC can be used as a drop-in module to perform optimization quickly and accurately. The most common optimization methods have been packaged in an easily accessible form for this purpose, as described in Basic Usage below.

Researchers in Evolutionary Computation. PyEC contains tools to construct, modify, and parameterize optimization methods. It can be used as a toolbox for the researcher in evolutionary computation to construct evolutionary algorithms on complex search domains, or to test new genetic operators or new combinations of existing evolutionary components. These features are not well documented yet, but the class-specific Documentation should help you get started.

Anyone interested in Stochastic Optimization. PyEC efficiently implements the most common methods in stochastic optimization. It can be used to test and experiment with how different algorithms work, and what effects different parameter settings can have.

Basic Usage

PyEC provides various optimization methods. If you just want to optimize, the following examples should help. PyEC provides easily accessible optimization routines for Evolutionary Annealing, Differential Evolution, Nelder-Mead, Generating Set Search, CMA-ES, Particle Swarm Optimization, and Simulated Annealing. Suppose we start a Python terminal session as follows:

Python 2.6.5 (r265:79359, Mar 24 2010, 01:32:55)
[GCC 4.0.1 (Apple Inc. build 5493)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import pyec.optimize
>>> from numpy import *
>>> def branin(x):
...    return (x[1] - (5.1/(4*(pi**2)))*(x[0]**2) + (5./pi)*x[0] - 6.)**2 + 10 * (1. - 1./(8.*pi))*cos(x[0])+10.

To start with, we are going to optimize Branin's function. This is a real function in two dimensions. It is defined as

\[f(x,y) = \left(y-\frac{5.1}{4\pi^2}x^2+\frac{5}{\pi}x-6\right)^2 + 10\left(1-\frac{1}{8\pi}\right)\cos(x) + 10\]

and it has three global optima with \(f(-\pi, 12.275) = f(\pi, 2.275) = f(9.42478, 2.475) = 0.39788735\). Usually, the optimization is constrained by \(-5 < x < 10\) and \(0 < y < 15\). Here are some examples of PyEC on Branin's function with no constraints, on about 2,500 function evaluations (default):

>>> pyec.optimize.evolutionary_annealing(branin, dimension=2)
(array([ 3.14159266,  2.27500002]), 0.39788735772973816)
>>> pyec.optimize.differential_evolution(branin, dimension=2)
(array([ 3.14100091,  2.25785439]), 0.39819904998361366)
>>> pyec.optimize.cmaes(branin, dimension=2)
(array([ 3.14159266,  2.27500001]), 0.39788735772973816)
>>> pyec.optimize.nelder_mead(branin, dimension=2)
(array([ 3.14159308,  2.27499873]), 0.39788735773149675)
>>> pyec.optimize.generating_set_search(branin, dimension=2)
(array([ 3.12552433,  2.29660443]), 0.39920864246465015)
>>> pyec.optimize.particle_swarm_optimization(branin, dimension=2)
(array([ 3.1282098 ,  2.26735578]), 0.39907497646047574)

In these examples, all we had to do was to pass our function to one of PyEC's optimizers along with the dimension (Branin's has 2 input variables), and these methods used default configurations in order to locate the global minimum (evolutionary_annealing() requires the space_scale parameter for unconstrained optimization). The return values are a tuple with two items. The first element is the best solution found so far, and the second element is the function value at that solution. In this case, many of the optimizers were not particularly accurate.
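The three quoted optima can be checked directly, independent of PyEC (a quick sanity check, not from the original docs; 9.42478 is 3*pi rounded to five decimals, so 3*pi is used here):

```python
from math import pi, cos

def branin(x):
    """Branin's function, as defined above."""
    return ((x[1] - (5.1 / (4 * pi**2)) * x[0]**2 + (5.0 / pi) * x[0] - 6.0)**2
            + 10.0 * (1.0 - 1.0 / (8.0 * pi)) * cos(x[0]) + 10.0)

# At each optimum the squared term vanishes and cos(x) = -1,
# so the value reduces to 10/(8*pi) = 0.39788735...
for point in [(-pi, 12.275), (pi, 2.275), (3 * pi, 2.475)]:
    assert abs(branin(point) - 0.39788735772973816) < 1e-9
```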
Some methods missed the global minimum by an error on the order of 0.01, which is much larger than we would prefer, especially for this simple problem. Most of these methods can be made more accurate by replacing some of the default parameters:

>>> pyec.optimize.differential_evolution(branin, dimension=2, generations=250, population=50, CR=.2, F=.2)
(array([ 3.14159085,  2.27498739]), 0.39788735794177654)
>>> pyec.optimize.generating_set_search(branin, dimension=2, generations=5000, expansion_factor=1., initial_step=0.5)
(array([ 3.1386125 ,  2.27810758]), 0.3979306095569175)
>>> pyec.optimize.particle_swarm_optimization(branin, dimension=2, omega=-0.5, phi_p=0.0)
(array([ 3.14159258,  2.27500003]), 0.39788735772976125)

Now all but one of the methods have found the global optimum accurately up to eight decimal places. If we had wished to find the maximum values rather than the minimum values of the function, we could have passed the parameter minimize=False to any of these optimizers:

>>> pyec.optimize.differential_evolution(branin, dimension=2, minimize=False)

In general, a function \(f(x)\) can be maximized by minimizing the alternate function \(-f(x)\) instead, which is what PyEC does internally when minimize=False. Branin's function is relatively easy to optimize; we would like to try a harder function. PyEC ships with several benchmark multimodal functions, many of which are defined in higher dimensions as well. These benchmarks can be referenced by name when calling PyEC's optimizers. One example is Rastrigin's function (see <http://en.wikipedia.org/wiki/Rastrigin_function>):

def rastrigin(x):
    return 10 * len(x) + ((x ** 2) - 10 * cos(2 * pi * x)).sum()

Rastrigin's has a minimum at the origin, where its function value is 0.
PyEC's optimizers can find this minimum with a little tweaking:

>>> pyec.optimize.differential_evolution("rastrigin", dimension=10, generations=2500, population=100, CR=.2, F=.2)
(array([ -3.09226981e-05,   2.19169568e-05,   4.46486498e-06,
         -1.50452001e-05,   6.03987807e-05,  -6.17905562e-06,
          4.82476074e-05,  -1.02580314e-05,  -2.07212921e-05,
          9.15748483e-06]), 1.6496982624403245e-06)

PyEC's optimizers are stochastic; that is, they search the space randomly. No two runs of any optimizer are the same. You can get widely varying results from different runs of the same algorithm, so it's best to run an algorithm a few times if you're not satisfied with the results:

>>> pyec.optimize.cmaes("rastrigin", dimension=10, generations=2500, population=250)
(array([  1.48258949e-03,   2.25335429e-04,  -5.35427662e-04,
         -2.74244483e-03,   3.20044246e-03,  -4.59549462e-03,
         -2.09654701e-03,   9.93491865e-01,   8.95951435e-04,
         -9.95219709e-01]), 1.9996060669315057)
>>> pyec.optimize.cmaes("rastrigin", dimension=10, generations=2500, population=250)
(array([ -4.26366209e-04,  -7.29513508e-04,   5.97365406e-04,
         -9.93842635e-01,   4.47482962e-04,  -3.32484925e-03,
         -3.98886672e-03,   4.06692711e-04,  -1.49134732e-03,
          3.80257643e-03]), 1.0041502969750979)
>>> pyec.optimize.cmaes("rastrigin", dimension=10, generations=2500, population=250)
(array([ -4.24651080e-04,   7.78200373e-04,   4.80037528e-03,
          1.06188871e-03,   2.50392639e-04,  -3.00255770e-03,
         -9.91998151e-01,   5.52063421e-03,  -3.44827888e-03,
         -9.97582491e-01]), 2.0081777356006967)

Every optimizer performs best on some set of problems, and performs worse on others. Here, differential_evolution() performs well on rastrigin, whereas CMA-ES is less reliable on this function. The opposite situation can hold true for other optimization problems.
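When restarts are cheap, the run-it-a-few-times advice can be automated with a small wrapper (our own helper, not part of the PyEC API) that works with any optimizer returning a (solution, value) pair:

```python
def best_of(optimize, k=5):
    """Run a stochastic optimizer k times and keep the (solution, value)
    pair with the lowest value. `optimize` is any zero-argument callable
    returning (solution, value)."""
    return min((optimize() for _ in range(k)), key=lambda result: result[1])

# Usage sketch (assuming PyEC is installed):
# best_of(lambda: pyec.optimize.cmaes("rastrigin", dimension=10), k=5)
```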
{"url":"http://www.alockett.com/pyec/docs/0.2/index.html","timestamp":"2014-04-18T02:58:06Z","content_type":null,"content_length":"152221","record_id":"<urn:uuid:9dd6aba3-d297-4886-825c-196067e9320e>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00601-ip-10-147-4-33.ec2.internal.warc.gz"}
Root 2 recursion proof help

August 26th 2009, 05:23 AM

Root 2 recursion proof help

f(n) is defined by setting f(1) = 2 and f(n) = 0.5(f(n-1) + 2/f(n-1)). I need to prove that f(n)^2 will always be bigger than 2. I tried doing this by finding an expression for f(n)^2 and then differentiating with respect to f(n-1), which gave me a minimum point of 2 for f(n)^2; however, this implies that f(n)^2 can equal 2 or more, not just more than 2 as was required. Can anybody tell me where I'm going wrong? Thanks!

August 26th 2009, 05:51 AM

If x^2 <> 2 then (0.5(x + 2/x))^2 - 2 = 0.25*(x^2 + 4 + 4/x^2) - 0.25*8 = 0.25*(x - 2/x)^2 > 0.

August 26th 2009, 10:58 AM

Thanks. Concerning the last statement in the proof: if x were to equal root 2, this wouldn't be satisfied. I think my teacher wants me to prove the function converges to root 2 but never equals it; I'm not sure though.

August 26th 2009, 03:57 PM

My proof is OK because it starts off with the assumption that x is not a square root of 2. So I have proved that if x^2 <> 2 it stays that way, and that all iterations apart from possibly the starting value are greater than the square root of 2, so I did prove what you asked in the first place. But you are right that I haven't proved that the value converges to the square root of 2 yet. To do that I'd need to establish an upper bound for f(n) such that f(n)^2 - 2 tended to 0. I'm sure this can't be difficult. The easiest way would probably be to show that (f(n+1)^2 - 2)/(f(n)^2 - 2) was less than some k with k < 1. Then f(n) would have to tend to a square root of 2.

August 27th 2009, 02:31 AM

My epsilon-delta proof skills are rusty, but here are some pointers that should let you prove this properly. f(n+1)/f(n) = 0.5(x + 2/x)/x = 0.5 + 1/x^2, where x = f(n). So for f(1) = 2 the sequence is decreasing and bounded below by the square root of 2, so it must converge to something. And f(n+1)/f(n) is strictly less than 1 except at x = square root of 2, which surely means that no greater value can be a lower bound.
In fact if f(n)^2 = 2 + e then f(n+1)^2 = 0.25(2+e) + 1 + 1/(2+e) = 1.5 + 0.25e + 0.5(1 + e/2)^-1, which for small e is approximately 1.5 + 0.25e + 0.5(1 - e/2 + e^2/4 - e^3/8 ...), that is, approximately 2 + e^2/8, which shows the convergence to the square root of 2 is quadratic.
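The thread's recursion is Newton's method for x^2 - 2 = 0, and its two claimed properties are easy to check numerically (a quick illustration, not part of the proof):

```python
def f(n):
    """f(1) = 2, f(n) = 0.5*(f(n-1) + 2/f(n-1)) -- Newton's method for x^2 = 2."""
    x = 2.0
    for _ in range(n - 1):
        x = 0.5 * (x + 2.0 / x)
    return x

# Early iterates stay above sqrt(2): f(2) = 1.5, f(3) = 1.41666...,
# and the error roughly squares at each step (quadratic convergence),
# so a handful of iterations already agree with sqrt(2) to machine precision.
```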
{"url":"http://mathhelpforum.com/discrete-math/99278-root-2-recursion-proof-help-print.html","timestamp":"2014-04-17T19:07:14Z","content_type":null,"content_length":"6740","record_id":"<urn:uuid:40e56fac-6cd0-4be4-8c64-860c54645ff8>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00128-ip-10-147-4-33.ec2.internal.warc.gz"}
November 21, 2000

California State Curriculum Commission and California State Board of Education
California Department of Education
721 Capitol Mall
Sacramento, CA 94244

Dear Commissioners and Board Members,

As former members of CRP panel # 3, we are writing to the Commission to provide further clarification of our report on the algebra text, Algebra 1, Concepts and Skills, published by McDougal Littell. While we realize that this letter has no official status, we believe that the treatment of this text has been subject to certain errors and misunderstandings that, if not corrected, would effectively circumvent the role of the CRP. In short, the opinion of the CRP was, and still is, that this text, even including the supplementary material in the "California Standards Key Concepts Book", fails to cover adequately all the California Standards for Algebra and hence should not be recommended for adoption. The IMAP assigned to this text (IMAP # 6) chose to disagree with this recommendation. Although they provided no specific examples of how they believed the relevant standards were, in fact, covered, they seem to have made certain implicit assumptions about the CRP report in choosing to disregard it. We believe that, if these are accepted by the Commission and the State Board, the focus will be diverted from the real issue, which is the inadequate coverage of the Standards by this text.

There are two issues that we believe are relevant. First of all, between the submission of the CRP's final report and the IMAP's report, the publisher, contrary to explicit rules for the meeting, handed out an extensive rebuttal to a preliminary draft of the CRP report. Because most of the CRP members had left before an analysis of the publisher's material was possible, there was no opportunity to address the statements made in that material.
Although this material was then withdrawn from the IMAP members the following day, it clearly had been read by several of them and appeared to have influenced their opinions. The second and more central issue concerns the supplementary text, "California Standards Key Concepts Book" (CSKC). The IMAP report, echoing the publisher materials referred to above, states that the "CRP did not review the California Standards Key Concepts Book....", implying that this justifies their opinion that all the Standards are adequately covered. The CSKC was, in fact, read carefully by the primary author of the preliminary report on this text. This did not alter the opinion that certain standards were not met. Furthermore, because there is no reference to this text in either the Student or Teacher Edition or any indication of how it was to be used, the entire CRP decided that it was not appropriate to use it for analysis of the standards. We feel it is important to explain this decision. All of the CRP's were sent only the Student Text and the Teacher's Edition for their content reviews and told to restrict their attention to these texts in their content analyses. The reason for this policy, as was clearly and explicitly stated by Professor Wu at the opening of the summer CRP/IMAP training session, was that it is essential that coverage of all standards be contained in the Student Text, that is, in material available to all students and to any family members, such as parents, guardians, or siblings, who might wish to help the students. Materials available only in class or only to the teacher are not relevant to this analysis. Furthermore, coverage of standards must be done in a logical and coherent manner. This does not necessarily mean that it must be contained in a single, bound volume; however, it cannot be contained in a hodge-podge of materials whose inter-relationship is unclear. 
Finally, the Teacher's Editions were sent to the CRP's so that they could better understand how the Student Texts were to be used. The CSKC text, which was sent to the CRP members as part of the original submission, contains a review of pre-grade level material (pages P1-P104), a discussion of further topics (T1-T25), and some brief sections on selected Standards (S1-S95). Only these last sections are relevant to the question of standards coverage. The sections discussing the standards that the CRP felt were not adequately covered are extremely short, mostly 4 pages long. The most important point here is that there is no connection between this text and the main textbook. Not only are they not co-ordinated in any fashion whatsoever, but the CSKC is not even mentioned in either the Student or Teacher Edition. In particular, there are no instructions at all to the teachers on how to use this text. We emphasize that this is not a question of some standards being covered in one text, others in the other text. Instead, single standards are somehow supposed to be covered by using a couple of pages from one text and a couple from the other. The publisher has now agreed to make the CSKC part of its standard Student package, but this does not address the primary issue of use of the text. They have agreed to put page references to the CSKC in the primary text. However, this still does not give any information on how the supplementary text is to be used. 
Furthermore, although putting in page references could be considered an edit, claiming that this somehow provides adequate coverage of the standards is something that would have to be checked by a mathematically trained panel; this is clearly beyond the scope of the correction/edit process.

Given that there seems to be some confusion concerning the validity of considering the CSKC text as part of the Student text, we feel that it is important to provide the Commission with further details of our original analysis and explain how, EVEN WITH THE CSKC text, several of the Standards are not adequately met.

Standard 3.0 is NOT COVERED. This Standard concerns absolute value, in particular, equations and inequalities involving absolute value. This standard is not discussed at all in the CSKC text, so whether or not that text is considered is irrelevant in this case. In order to cover the topic of absolute value, one must understand "and" and "or" statements. The discussion of these topics is inadequate. They are never related to the concepts of intersection and union; students are simply told that "and" corresponds to one type of inequality and "or" to another. Furthermore, the transition from equations involving absolute value to inequalities involving absolute value is utterly confused. In section 6.6, students are told that a solution to | ax + b | = c is a solution to either ax + b = c OR ax + b = -c. In the very next section, 6.7, they are told that a solution to | ax + b | < c is a solution to ax + b < c AND ax + b > -c. A solution to | ax + b | > c is a solution to ax + b > c OR ax + b < -c. NO EXPLANATION whatsoever is given for these statements. Any reasonable student would wonder where the "and" came from and why only in one of the inequalities. A student might also notice that, if "c" is not required to be positive (and the text does not require this in the inequality case), then the statements about the inequalities are incorrect.
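For reference (our addition, not a quotation from the textbook): with u = ax + b and c > 0, the omitted justification is a single line,

\[ |u| < c \iff u < c \ \text{and}\ -u < c \iff -c < u < c, \qquad |u| > c \iff u > c \ \text{or}\ u < -c, \]

and the "and" appears precisely because both inequalities must hold at once.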
This is a totally misleading and completely unacceptable treatment of this standard.

Standard 18.0 is NOT ADEQUATELY COVERED

This standard is concerned with the concept of a relation and how to decide whether or not a relation is a function. Students are supposed to be able to justify their answer. The problem with the discussion of this standard is similar to that of many topics in this text. Students are repeatedly given rules to apply or formulas to use with absolutely no explanation or justification. In this case, the students are told that, for a relation to be a function, its graph must pass the "Vertical Line Test": each vertical line must intersect the graph in at most 1 point. Again, no justification or explanation is given for this fact. Nor is it stated that the input is assumed to be the x-coordinate and the output the y-coordinate, without which the statement is incorrect.

The discussion in the CSKC text is essentially identical to that in the main text. There is no further explanation or anything else additional that is not in the main text.

Standards 14.0 and 19.0 are NOT ADEQUATELY COVERED

These standards deal with the quadratic formula and completing the square. The text provides the quadratic formula in section 9.6 with no explanation. As noted in the discussion of the previous Standards, this is a problem that is pervasive in the text; procedures and formulae are given to the students with no justification; they are to be used without the students knowing where they come from.
There is no need to delay this central topic in algebra until the very end of the text because the intervening material is essentially unused. In particular, the form of a perfect square quadratic polynomial is barely mentioned in the 2 pages in which completing the square is presented and the quadratic formula is proved. The text simply states that, "By using FOIL to expand ..., you can show that this pattern holds for any real number b." The text itself does not show this fact nor does it even refer to the section, a hundred pages before, where FOIL is explained. The CSKC has 4 pages on completing the square in which it provides a few, slightly more complicated, examples. However, it still doesn't provide any justification for the completing the square formula; it doesn't even mention FOIL. Even using the material in both the main text and the CSKC, which are not even vaguely co-ordinated, the connection between the process of completing the square in the specific examples and what is done in a general form in the proof of the quadratic formula is never made. We want to emphasize that our objection to the presentation of the quadratic formula and completing the square is not that it is done in a non-standard order. It is that it is done in a manner that is antithetical to the goal teaching students logical reasoning. A formula is provided and used to compute values without the students having any understanding of what they are doing. When a explanation is finally given, it is so unmotivated and disjointed as to be useless. Also, the degree of sophistication here is considerably below grade level. This is a totally inadequate coverage of these Standards. Sincerely yours, Steve Kerckhoff, Wayne Bishop Jane Friedman, Yat-Sun Poon
{"url":"http://www.csun.edu/~vcmth00m/skills.html","timestamp":"2014-04-18T11:08:22Z","content_type":null,"content_length":"12657","record_id":"<urn:uuid:abfa548a-3229-4878-aedb-a01d9aadac8c>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00641-ip-10-147-4-33.ec2.internal.warc.gz"}
The multiplicative regression model used for estimating stem volume [20] requires as input the forest inventory data and different statistical quantities derived from first- and last-echo ALS data including mean values, coefficients of variation, percentiles of heights, and canopy densities for several height intervals. The calculations are based on circular sample plot areas, whereas different diameters are evaluated. The model is formulated as:

\[
Y = \beta_0\, h_{0,f}^{\beta_1} h_{10,f}^{\beta_2} \cdots h_{90,f}^{\beta_{10}}\, h_{0,l}^{\beta_{11}} h_{10,l}^{\beta_{12}} \cdots h_{90,l}^{\beta_{20}}\, h_{\mathrm{mean},f}^{\beta_{21}} h_{\mathrm{mean},l}^{\beta_{22}} h_{\mathrm{cv},f}^{\beta_{23}} h_{\mathrm{cv},l}^{\beta_{24}}\, d_{0,f}^{\beta_{25}} d_{1,f}^{\beta_{26}} \cdots d_{9,f}^{\beta_{34}}\, d_{0,l}^{\beta_{35}} d_{1,l}^{\beta_{36}} \cdots d_{9,l}^{\beta_{44}} \tag{7}
\]

where Y is the v[stem,fi] in m^3 ha^-1; h[0,f], h[10,f], …, h[90,f] are percentiles of the first-echo laser canopy heights for 0%, 10%, …, 90% in m; h[0,l], h[10,l], …, h[90,l] are percentiles of the last-echo laser canopy heights for 0%, 10%, …, 90% in m; h[mean,f], h[mean,l] are mean values of the first- and last-echo canopy heights in m; h[cv,f], h[cv,l] are coefficients of variation of the first- and last-echo canopy heights in percent. As suggested by Næsset [20] and Nilson [32], first- and last-echo returns with heights less than 2 m are classified as ground hits, stones, shrubs, etc., and therefore the heights are set to zero; d[0,f], d[1,f], …, d[9,f] are cumulative canopy densities of first-echo laser points for the fraction no.
0, 1, …, 9; d[0,l], d[1,l], …, d[9,l] are cumulative canopy densities of last-echo laser points for the fraction no. 0, 1, …, 9. The 10 fractions are of equal height and are calculated by dividing the difference between the highest and the lowest (2 m) canopy height by 10. For each fraction no. 0, 1, …, 9 (> 2 m) the proportion of laser hits above the fraction limits to the total number of laser hits was calculated for both first- and last-echo points. The model parameters of Eq. (7) can be estimated with the linear form of the equation using logarithmic variables, as shown in Eq. (8):

\[
\ln Y = \ln \beta_0 + \beta_1 \ln h_{0,f} + \beta_2 \ln h_{10,f} + \cdots + \beta_{10} \ln h_{90,f} + \beta_{11} \ln h_{0,l} + \beta_{12} \ln h_{10,l} + \cdots + \beta_{20} \ln h_{90,l} + \beta_{21} \ln h_{\mathrm{mean},f} + \beta_{22} \ln h_{\mathrm{mean},l} + \beta_{23} \ln h_{\mathrm{cv},f} + \beta_{24} \ln h_{\mathrm{cv},l} + \beta_{25} \ln d_{0,f} + \beta_{26} \ln d_{1,f} + \cdots + \beta_{34} \ln d_{9,f} + \beta_{35} \ln d_{0,l} + \beta_{36} \ln d_{1,l} + \cdots + \beta_{44} \ln d_{9,l} \tag{8}
\]

For the multiple regression analysis the multiplicative model (Eq. (7)) is transformed into a logarithmic scale (Eq. (8)). This common procedure has the advantage that the complex multiplicative model can be expressed by a linear one. Thus, a simple least-squares method can be applied for estimating the model parameters. For the final prediction of the stem volume a conversion of the log-linear model parameters to the original untransformed scale is necessary. This procedure introduces a bias because large values are compressed on the logarithmic scale and thus tend to have less leverage than small ones [33]. Thus, the bias is not an arithmetic constant but a constant fraction of the estimated value. In the current study the empirical ratio estimator (RE) approach developed by Snowdon [34] is used for correcting the bias. As demonstrated by Snowdon [34], this method is more reliable than corrections estimated from variance as described, for example, by Baskerville [35] or Sprugel [36].
The correction factor for the empirical ratio estimator is calculated as shown in Eq. (9):

\[
RE = \sum_{i=1}^{n} v_{\mathrm{stem},fi,i} \bigg/ \sum_{i=1}^{n} v_{\mathrm{stem},i} \tag{9}
\]

where RE is the correction factor; v[stem,fi,i] are the observed values (stem volume calculated from the FI data) in m^3 ha^-1; v[stem,i] are the predicted values (stem volume) in m^3 ha^-1, retransformed back to the original untransformed scale without bias correction; and n is the number of sample plots. Finally, the bias-corrected predicted stem volumes are calculated by multiplying v[stem,i] with RE.

Due to the variable sizes of the forest inventory sample plots the appropriate sample plot size to extract the ALS data is not known in advance. Therefore, several diameters are used and evaluated. The sample plot size which leads to the highest accuracy is used to analyze the effects of varying ALS properties. As mentioned in section 2.2, the study area is covered by two ALS data sets with different point densities and acquisition times. The winter ALS data cover 92 and the summer ALS data cover 64 of the 103 forest inventory sample plots which could clearly be co-registered. Within the overlapping area of the two data sets 52 forest inventory sample plots are available, which are used to analyze the effects of varying acquisition times. As the summer ALS data have a point density of 2.7 p/m^2, a thinning of 66% is applied to reduce the density to that of the winter data (0.9 p/m^2). Based on the sorted acquisition time the thinning is done by a systematic removal of points. To understand the impact of different ALS point densities the original and the thinned data are analyzed for each acquisition time separately. Thus, the results can easily be compared within each flight campaign as the flying height, the local incidence angles, the acquisition times, the sensor characteristics, and the used sample plots are similar for the original and the thinned data set.
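Eq. (9) and the final correction step amount to only a few lines; a minimal sketch (our illustration, with generic variable names):

```python
def snowdon_correction(observed, predicted):
    """Snowdon's empirical ratio estimator: RE = sum(observed) / sum(predicted).

    `predicted` are stem volumes back-transformed from the log-linear model
    without bias correction; the corrected predictions are RE * predicted.
    Returns (RE, corrected_predictions).
    """
    re = sum(observed) / sum(predicted)
    return re, [re * p for p in predicted]
```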
For the validation of the calibrated models a cross-validation procedure is used, where in each step one observation is excluded from the calibration of the model. Since the model is fitted n times, where n is the number of observations, the prediction error for each excluded observation can be calculated. Finally, statistical parameters of the prediction errors can be calculated, including the range, the mean, and the standard deviation of the errors.
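The leave-one-out procedure can be sketched generically. In the sketch below (my own illustration), `fit` stands for any calibration routine; a one-parameter ratio model replaces the paper's full log-linear regression:

```python
from statistics import mean, stdev

def loocv_errors(xs, ys, fit, predict):
    """Leave-one-out cross-validation: refit without observation i, then predict it."""
    errors = []
    for i in range(len(xs)):
        train_x = xs[:i] + xs[i+1:]   # all observations except i
        train_y = ys[:i] + ys[i+1:]
        model = fit(train_x, train_y)
        errors.append(predict(model, xs[i]) - ys[i])
    return errors

# toy calibration: y ~ b * x, least squares through the origin
fit = lambda xs, ys: sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
predict = lambda b, x: b * x

errs = loocv_errors([1.0, 2.0, 3.0, 4.0], [2.1, 3.9, 6.2, 7.9], fit, predict)
stats = {"min": min(errs), "max": max(errs), "mean": mean(errs), "sd": stdev(errs)}
```

The model is fitted n times, and the range, mean, and standard deviation of the held-out errors come straight from the error list.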
., Suhartono (2010) KOMPUTASI SIRKUIT DENGAN POLINOMIAL PEMBANGKIT (Circuit Computation with Generator Polynomials). PROSIDING SEMINAR NASIONAL ILMU KOMPUTER UNIVERSITAS DIPONEGORO 2010. Full text not available from this repository.

The generator polynomial is an important tool for solving counting problems. This paper discusses how the generator polynomial is used to solve the model counting problem, in this case the circuit computation problem. The dividing circuit needs to determine the quotient and remainder terms: V(X)/g(X) = q(X) + r(X)/g(X). The operational steps of the circuit are as follows: the first r shifts enter the most significant coefficients of V(X); after the r-th shift, the quotient output is g_r^(-1) V_m, which is the highest-order term in the quotient. For each quotient term q_i, the polynomial q_i g(X) must be subtracted from the dividend. At each shift of the register, the difference is shifted one stage; the highest-order term is shifted out, while the next significant coefficient of V(X) is shifted in. After m+1 total shifts into the register, the quotient has been presented serially at the output and the remainder resides in the register.

Keywords: generator polynomial, circuit, shift, register
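The shift-register division described in the abstract can be sketched in software. The sketch below is my own illustration (not the paper's code) specialised to GF(2), where the leading coefficient g_r is 1 and subtraction is XOR:

```python
def lfsr_divide(v_bits, g_bits):
    """Divide V(X) by g(X) over GF(2) with a shift register (deg g >= 1).

    v_bits, g_bits: coefficient lists, most significant first, g_bits[0] == 1.
    Returns (quotient, remainder) as coefficient lists.
    """
    r = len(g_bits) - 1        # degree of g(X) = number of register stages
    reg = [0] * r              # register contents = running remainder
    quotient = []
    for bit in v_bits:         # shift the coefficients of V(X) in, MSB first
        out = reg[0]           # stage shifted out: the next quotient bit
        reg = reg[1:] + [bit]  # shift the register one stage
        if out:                # subtract out * g(X); over GF(2) this is XOR
            reg = [a ^ b for a, b in zip(reg, g_bits[1:])]
        quotient.append(out)
    return quotient[r:], reg   # first r outputs appear while the register fills

# (x^3 + x + 1) / (x + 1) over GF(2): quotient x^2 + x, remainder 1
q, rem = lfsr_divide([1, 0, 1, 1], [1, 1])
```

As in the abstract, the quotient bits emerge serially while the remainder is what is left in the register.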
Factorising expressions
June 20th 2013, 12:51 AM #1
Hi guys, I was just wondering if any of you could show me the method you use for factorising algebraic expressions, and which is the easiest and most convenient method to use. For example, how would you factorise the following: 2x(x-4) - 3(4-x)? I need a good method for the exams: a simple yet fast method to factorise without getting confused. Any help would be appreciated!

Re: Factorising expressions
You need to recognise that (4-x) = -(x - 4). Then the factorisation should be easy.

Re: Factorising expressions
Firstly, there are no shortcuts, and we should not look for them without understanding the basic concept. There is no standard way of factorising all polynomials. For example, we can factorise quadratics by splitting the middle term, rearranging terms, completing the square, etc. For cubic polynomials we first check whether the expression is a perfect cube; we then try to find the first linear factor by trial and error using the remainder theorem. I hope that helps you.
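Putting the first reply's hint to work on the example (my own worked steps, not from the thread):

```latex
\begin{align*}
2x(x-4) - 3(4-x) &= 2x(x-4) + 3(x-4) \quad\text{since } (4-x) = -(x-4)\\
                 &= (x-4)(2x+3).
\end{align*}
```

Once the sign is flipped, the common factor (x-4) can be taken out directly.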
Convert pascal to inch of water column - Conversion of Measurement Units
›› Convert pascal to inch of water column
›› More information from the unit converter
How many pascal in 1 inch of water column? The answer is 249.088908333. We assume you are converting between pascal and inch of water column. You can view more details on each measurement unit: pascal or inch of water column. The SI derived unit for pressure is the pascal. 1 pascal is equal to 0.00401463078662 inch of water column. Note that rounding errors may occur, so always check the results. Use this page to learn how to convert between pascals and inches of water column. Type in your own numbers in the form to convert the units!
›› Definition: Pascal
The pascal (symbol Pa) is the SI unit of pressure. It is equivalent to one newton per square metre. The unit is named after Blaise Pascal, the eminent French mathematician, physicist and philosopher.
›› Metric conversions and more
ConvertUnits.com provides an online conversion calculator for all types of measurement units. You can find metric conversion tables for SI units, as well as English units, currency, and other data. Type in unit symbols, abbreviations, or full names for units of length, area, mass, pressure, and other types. Examples include mm, inch, 100 kg, US fluid ounce, 6'3", 10 stone 4, cubic cm, metres squared, grams, moles, feet per second, and many more!
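The two factors quoted above are reciprocals of each other, so a small helper (a sketch of my own, not part of the converter site) covers both directions:

```python
PA_PER_INCH_H2O = 249.088908333  # pascals per inch of water column

def pascal_to_inch_h2o(pa):
    """Convert a pressure in pascals to inches of water column."""
    return pa / PA_PER_INCH_H2O

def inch_h2o_to_pascal(in_h2o):
    """Convert a pressure in inches of water column to pascals."""
    return in_h2o * PA_PER_INCH_H2O

# 1 Pa is about 0.0040146 inch of water column, matching the page's factor
small = pascal_to_inch_h2o(1.0)
```

Round-tripping a value through both functions recovers it up to floating-point error, which is the "rounding errors may occur" caveat in the page.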
Elmora, NJ Statistics Tutor
Find an Elmora, NJ Statistics Tutor
...I took Biostatistics at the graduate level during my Master's program and received an A. It dealt with applying statistical methods in biology and medicine. Topics covered were statistical analysis for cross-sectional studies and case-control studies.
18 Subjects: including statistics, calculus, algebra 1, algebra 2
...I was eager to become a teacher when I was young and still love to talk with friends from different backgrounds. I graduated from a Chinese university with a Bachelor's degree in English and from Rutgers University with a Master's degree in management. As the winner of many translation and inter...
9 Subjects: including statistics, English, reading, Chinese
...Everyone has different study skills and I can adapt to them. I use a lot of technology to help study and make a plan that makes an individual very independent. I am a math major, and a job description said that they needed help with the elementary Praxis math section.
27 Subjects: including statistics, reading, Spanish, geometry
...In all of my sessions, I emphasize the importance of creating good study habits. With what I learned in my psychology classes and long academic career, I teach students the most effective ways to study and retain the material. I graduated with a BA in Economics from Rutgers University with an A in Econometrics.
23 Subjects: including statistics, English, calculus, accounting
...I took and passed (the first time) the Praxis Elementary Ed Content Knowledge (AKA Praxis II) with a score of 193 and received an ETS Certificate of Excellence for my score. I also passed the Middle School Math Praxis with a score of 189, and Middle School Science with a score of 184. (Potential range ...
58 Subjects: including statistics, English, physics, writing
Recursive function to develop permutations
Hung Jung Lu hungjunglu at yahoo.com
Fri Oct 22 07:09:03 CEST 2004

aleaxit at yahoo.com (Alex Martelli) wrote:
> def permute(Xs, N):
>     if N > 0:
>         for x in Xs:
>             for sub in permute(Xs, N-1):
>                 yield [x]+sub
>     else:
>         yield []
> If you do choose an if/else it seems to me things are more readable when
> you arrange them this way (rather than with if N<=0, so the for in the
> else). Matter of tastes, of course.

For an unreadable voodoo version, here is a one-liner:

permute = lambda Xs, N: reduce(lambda r, a: [p + [x] for p in r for x in Xs], range(N), [[]])

print permute(['red', 'white', 'blue', 'green'], 3)

Actually, we should probably not call it "permute". In combinatorics, "permutation" is usually reserved for a sorting operation on a given combination. However, I don't have a good name for the described operation, either. Some people would probably call it "sequential draw with replacement". It's more like "combination" as in the case of a "combination lock", or as in "slot-machine combinations". (46 entries in Google. Zero entries for "slot-machine permutations", quoted searches.)

For simple permutations of a given list, the voodoo version is:

permutations = lambda x: reduce(lambda r, a: [p[:i] + [a] + p[i:] for p in r for i in range(len(p)+1)], x[1:], [x[:1]])

print permutations(['a', 'b', 'c'])

No if/else statements. No explicit recursion. And it works for zero-length lists as well. Now, if someone could do the general P(n, k) and C(n, k) versions, I'd appreciate it. :)

Hung Jung

More information about the Python-list mailing list
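One possible answer to the closing request, in the same reduce style (my own sketch, not from the thread; under Python 3, reduce lives in functools): P(n, k) keeps only injective draws, and C(n, k) builds strictly increasing index tuples.

```python
from functools import reduce

# P(n, k): ordered draws without replacement (k-permutations).
# The membership test assumes the elements of xs are distinct.
k_permutations = lambda xs, k: reduce(
    lambda r, _: [p + [x] for p in r for x in xs if x not in p],
    range(k), [[]])

# C(n, k): unordered draws without replacement, built from increasing indices
k_combinations = lambda xs, k: [
    [xs[i] for i in c]
    for c in reduce(
        lambda r, _: [c + [i] for c in r
                      for i in range((c[-1] + 1) if c else 0, len(xs))],
        range(k), [[]])]
```

In practice the standard library's itertools.permutations and itertools.combinations do the same job, but the reduce form matches the voodoo spirit of the post.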
Eigenvalues for toral Anosov automorphisms

It is well known that on every $d$-dimensional torus there exist linear Anosov automorphisms. My question is the following: Given $k< d$, does there exist a linear Anosov automorphism of $\mathbb{T}^d$ with exactly $k$ eigenvalues smaller than $1$? If true (which I expect), does there exist an *irreducible* linear Anosov automorphism of $\mathbb{T}^d$ with exactly $k$ eigenvalues smaller than $1$? This can be phrased in terms of matrices with integer coefficients (please add the corresponding relevant tags) as: Given $k< d$, does there exist a matrix in $SL(d,\mathbb{Z})$ such that all eigenvalues have modulus different from $1$ and $k$ of them are of modulus smaller than $1$? What about if the characteristic polynomial is irreducible over $\mathbb{Q}$? Some relevant related information can be found in this paper (http://arxiv.org/pdf/1009.2994v2.pdf), where some results of W. Duke, Z. Rudnick, P. Sarnak as well as of Nevo and Sarnak are referred to.

ds.dynamical-systems anosov-systems algebraic-number-theory
A good starting point is the article on Wikipedia, I guess: en.wikipedia.org/wiki/Dirichlet%27s_unit_theorem 2. Once you have a polynomial $p$ with the leading coefficient 1 and the constant term $\pm1$, you may construct the companion matrix from this polynomial (en.wikipedia.org/wiki/Companion_matrix) whose determinant is also $\pm1$ and whose characteristic polynomial is exactly $p$. Hope this helps. – Nikita Sidorov Mar 18 '12 at 18:17 Thanks. I guess that I don't have problem with 2. But for 1. it seems that I need to understand a bit more why should the existence of units gives the desired integer polynomials with leading coeficient 1 and constant term $\pm 1$. I will accept this answer since it is my ignorance which does not allow me to fully understand the answer yet. – rpotrie Mar 18 '12 at 18:30 1 If you really need an explicit example for your research, I could ask around. I think I know a guy who might know this. – Nikita Sidorov Mar 18 '12 at 19:08 No, thanks. The question is for a talk I must give in a seminar about what is known on Anosov diffeomorphisms and this question naturally came up. With the references you give I already understand much more, and toghether with the Pisot number commented below I already can answer the first question (about existence of Anosov with any splitting, maybe not irreducible). – rpotrie Mar 18 '12 at 19:48 add comment This is only a partial answer, which I shall delete if I find a better one. Every pair $(k,d)$ of the form $$d=\frac12\phi(n),\qquad k={\rm card}(\frac{n}{6}\le j \le\frac{n}{2},j\wedge n= 1)$$ is OK: take the cyclotomic polynomial $\Phi_n$ and form the irreducible polynomial $P_n\in{\mathbb Z}[X]$ defined by $$\Phi_n(t)=t^{\frac{n}{2}}P_n\left(t+\frac1t\right).$$ The roots of $P_n$ are the numbers $2\cos\frac{2j\pi}{n}$ with $j\wedge n=1$, smaller than $1$ if and only if $\frac{n}{6}\le j \le\frac{n}{2}$. up vote 2 down vote If instead $k=d-1$, take any Pisot number. 
Edit (after Nikita's comment below): One may take the companion matrix of $X^d-X^{d-1}-\cdots-X-1$. Its only root of modulus greter than $1$ is a Pisot number, also called a multinacci number. If $d=2$, this is just the golden ratio, at the basis of the Fibonacci sequence, hence the `word' multinacci. Thanks, this is useful. I guess it should not be deleted, some key words are of importance, at least for me (in order to look into references is important to have those key words). – rpotrie Mar 18 '12 at 17:42 2 Denis, for $k = d -1$ one can take the companion matrix for $x^d-x^{d-1}-\dots-x-1$. This is a Pisot number (called the multinacci number), hence $k=d-1$. – Nikita Sidorov Mar 18 '12 at add comment Not the answer you're looking for? Browse other questions tagged ds.dynamical-systems anosov-systems algebraic-number-theory or ask your own question.
numpy.polyadd

Returns sum of polynomials; a1 + a2. Input polynomials are represented as an array_like sequence of terms or a poly1d object.

Parameters:
a1 : {array_like, poly1d}
Polynomial as sequence of terms.
a2 : {array_like, poly1d}
Polynomial as sequence of terms.

Returns:
out : {ndarray, poly1d}
Array representing the polynomial terms.
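Coefficients are given highest power first, and the shorter sequence is zero-padded on the left, e.g.:

```python
import numpy as np

# (x + 2) + (9x^2 + 5x + 4) = 9x^2 + 6x + 6
total = np.polyadd([1, 2], [9, 5, 4])
print(total)   # [9 6 6]
```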
Number of target values in the one prediction

I use Python's scikit-learn module for predicting some values in a CSV file. I am using Random Forest Regressor to do it. As an example, I have 8 training features and 3 values to predict: which of the following pieces of code must I use? As the values to be predicted, do I have to give all target values at once (A) or separately (B)?

Variant A:
# Reading CSV file
dataset = genfromtxt(open('Data/for training.csv','r'), delimiter=',', dtype='f8')[1:]
# Target values to predict
target = [x[8:11] for x in dataset]
# Feature values to train on
train = [x[0:8] for x in dataset]
# Starting training
rf = RandomForestRegressor(n_estimators=300, compute_importances=True)
rf.fit(train, target)

Variant B:
# Reading CSV file
dataset = genfromtxt(open('Data/for training.csv','r'), delimiter=',', dtype='f8')[1:]
# Target values to predict
target1 = [x[8] for x in dataset]
target2 = [x[9] for x in dataset]
target3 = [x[10] for x in dataset]
# Feature values to train on
train = [x[0:8] for x in dataset]
# Starting training
rf1 = RandomForestRegressor(n_estimators=300, compute_importances=True)
rf1.fit(train, target1)
rf2 = RandomForestRegressor(n_estimators=300, compute_importances=True)
rf2.fit(train, target2)
rf3 = RandomForestRegressor(n_estimators=300, compute_importances=True)
rf3.fit(train, target3)

Which version is correct? Thanks in advance!
But, why in case (A) i have much more accurate predictions than in case (B)? – Emkan Jan 25 '13 at 17:08 No idea. I thought it was doing the same internally. Maybe it's not the case. I will to check the source code. – ogrisel Jan 25 '13 at 17:40 add comment Both are possible, but do different things. The first learns independent models for the different entries of y. The second learns a joint model for all entries of y. If there are meaningful relations between the entries of y that can be learned, the second should be more accurate. up vote 4 down vote As you are training on very little data and don't regularize, I imagine you are simply overfitting in the second case. I am not entirely sure about the splitting criteria in the regression case but it takes much longer for a leaf to be "pure" if the label-space is three dimensional than if it is just one-dimensional. So you will learn more complex models, that are not warranted by the little data you have. Indeed, that would make sense. Thanks Andreas! – ogrisel Jan 25 '13 at 18:06 So the only way I can get SO karma seems to be that you don't know the answer ^^ That way I'll never catch up with larsmans ;) – Andreas Mueller Jan 25 '13 at 18:52 Actually i want to predict 24 values. I have 11 values to train. Every training variable have 32000 samples. I am predicting output of some chemical process - and yes there are meaningful relations between the entries (sum of all 24 outputs must be = 100), What can you recommend me solve this problem? – Emkan Jan 25 '13 at 19:40 1 Try both an pick the method that works best by measuring the score you want to optimize using cross validation. – ogrisel Jan 26 '13 at 14:57 add comment Not the answer you're looking for? Browse other questions tagged python statistics machine-learning scikit-learn random-forest or ask your own question.
FOM: What is the standard model for PA? Vladimir Sazonov sazonov at logic.botik.ru Mon Mar 23 17:11:15 EST 1998 Torkel Franzen wrote: > Vladimir Sazonov says: > >I think that after realizing that the natural numbers *may be > >seen* as constituting a very indeterminate "set" it is difficult > >to return to older, I would say, oversimplified picture as if > >nothing was happened. At least this is my case. Probably you > >are able to be so "solid" in your opinion to not change your > >belief (or what it is) in standard model and simultaneously to > >realize possible vagueness of the same(?) model. > Well, I would say that considerations regarding feasibility inevitably > lead to vague concepts (that may yet be mathematically and philosophically > interesting and useful), but that this doesn't mean that the idealized > version of the natural numbers - i.e. the natural numbers as ordinarily > understood - is unclear or indeterminate. What about "vagueness of the same(?) model"? It is normal that an intuitive concept is vague. However, our *mathematical* formalization of this concept should be possibly as rigorous and determinate as e.g. the formalization by Peano Arithmetic of even more vague concept of "all" natural numbers implicitly involving the concept of feasible numbers. > >By the way, do you see now "indeterminateness of arbitrary > >property" of natural numbers in rather short segment > >0,1,2,...,1000 of natural numbers as in the case of "all" > >numbers or this set is still completely determinate for you? > Already the notion of "arbitrary property of 0" is indeterminate. > The set 0,1,2....1000 is determinate, though, as is the set > 0,1,2,... of all natural numbers. I'm not prepared to defend the > notion of "arbitrary subset of the natural numbers" as determinate. Let me formulate this question more definitely. 
Do you see now the indeterminateness of the powerset of {1,2,...,1000} or, alternatively, of the set 2^1000={0,1}^1000 of all finite binary strings of length 1000? Actually this is a rather unclear point. I will mainly present my very informal *feeling* on what may happen here. Let us put aside Peano Arithmetic (PA), which completely neglects feasibility and physical realizability concepts. In particular we cannot rely on the evidently infeasible process of creating the elements of the above set in lexicographical order. It is clear that only some strings of this "set" exist(ed) or will be realized in the (extremely indeterminate) future in our material world. Anyway, I am not sure that any of our formal theories, even PA, can completely fix this set {0,1}^1000. It seems that there is some analogy with the case of the continuum (2^N, the powerset of N={1,2,...}), which turns out not to be fixed by ZFC due to the Gödel and Cohen results. Is it "true" that (A) the "set" of "simple" such strings (i.e. those constructed by a simple algorithm) like 00...0 (only zeros), 11...1 (only ones), 0101...01 (alternating zeros and ones), etc. exhausts "all" strings of length 1000, or (B) do we inevitably need to use a coin? Does any abstract concept of random choice really fix the above set of strings? Note that the above alternative (A) or (B) can be made more precise as follows: Are "all" binary strings of length 1000 generated by some fixed *feasibly computable* function f:{1}^* -> {0,1}^1000? Here {1}^* denotes the "set" of "all" finite unary strings of *feasible* length. Put another way, is this set "enough" for our hypothetical theory of feasibly finite binary strings (as was the case with Gödel constructible sets in ZFC), or do we inevitably need "non-feasibly-constructive" or "random" strings? Only after (and simultaneously with) hard work on and with the formalization of feasibility could we get a proper understanding of this concept and of related questions such as the above.
> >Our understanding and "justification" goes *in terms*, and via > >numerous repetitions of using some formal rules. Otherwise, how > >to teach children to mathematics? > How we actually learn arithmetic is a difficult question. I don't > find any ideas to the effect that it *must* happen in some particular > way convincing. Certainly rules play a large role, but what is it to > learn a rule, and what conditions must be satisfied if we are to > be able to learn a rule? I agree that these questions are indeed difficult. Actually, I had intention to say by the words "goes *in terms*" and "via" that formal rules and informal understanding cannot be separated one from another and their roles are at least equal. I.e., NOT "first informal understanding of a concept, and only then creation and justification of related rules". Therefore the role of formal rules -- of our *subjective* but of course not completely free creations -- is at least higher than it is sometimes considered in discussions on f.o.m. Vladimir Sazonov Program Systems Institute, | Tel. +7-08535-98945 (Inst.), Russian Acad. of Sci. | Fax. +7-08535-20566 Pereslavl-Zalessky, | e-mail: sazonov at logic.botik.ru 152140, RUSSIA | http://www.botik.ru/~logic/SAZONOV/ More information about the FOM mailing list
East Brunswick Calculus Tutor
Find an East Brunswick Calculus Tutor
...I also work part time at a tutoring company. I have experience tutoring students from kindergarten to advanced mathematics at the undergraduate level. I believe that developing a "number sense" is very important to succeed in math, and I spend more time developing this sense rather than having students memorize formulas and algorithms.
16 Subjects: including calculus, statistics, geometry, algebra 1
I am originally from Argentina, where I received both my BS and PhD degrees in Physics. I spent a few months in Paris, France, and have lived in the US for the last 10 years. I am an accomplished and experienced teacher and tutor in Physics and Mathematics with more than 10 years of experience.
9 Subjects: including calculus, physics, algebra 1, algebra 2
...Please do not hesitate to contact me if I could be of any help. I look forward to working with you. Sincerely, Dheeraj. I taught Algebra 1 at the 9th grade level during my time as a teacher.
26 Subjects: including calculus, writing, statistics, geometry
...I've been using Apple computers since the first Macintosh Classic came out. Although I also have PCs at home and at work, Macs are my main tools for graphic design, movie editing, and web design. I consistently follow the technological advances in MacOS, as well as its connection with iPhones, iPads, and iTunes.
83 Subjects: including calculus, chemistry, physics, statistics
...Tutoring is not lecturing, and I adapt to the student, working through problems with the student and making up similar problems to make sure the student understands. I have a very strong academic math background, including a math degree from Johns Hopkins. I have over 3000 hours of tutoring experience.
32 Subjects: including calculus, English, writing, geometry
[Numpy-discussion] Operations on integer matrices known to return integers
Six Silberman silberman.six@gmail...
Tue Jul 10 15:52:45 CDT 2012
We now have
>>> a = array([[1, 2], [3, 4]], dtype=int8)
>>> a
array([[1, 2],
       [3, 4]], dtype=int8)
>>> d = linalg.det(a)
>>> d
>>> d.dtype
This is at least partly due to the use of LU factorization in computing the determinant. Some operations on integer matrices always return integers. It occurred to me that it might be nice to be able to ask functions performing such operations to always return an integer. We could do
>>> print d
but this seems a little unfortunate. If this has been discussed many times previously and you just groaned inwardly, or if this is simply outside the scope of numpy, please let me know and I'll move along.
Thanks very much,
More information about the NumPy-Discussion mailing list
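An exact integer determinant of the kind the post asks for can be computed without floats at all. The sketch below (my own, not from the thread) uses fraction-free Bareiss elimination, where every division in the recurrence is exact, so Python's arbitrary-precision ints keep the result exact:

```python
def int_det(matrix):
    """Exact determinant of a square integer matrix via Bareiss elimination."""
    a = [list(row) for row in matrix]
    n = len(a)
    sign, prev = 1, 1
    for k in range(n - 1):
        if a[k][k] == 0:                  # pivot: swap in a row with a nonzero entry
            for r in range(k + 1, n):
                if a[r][k]:
                    a[k], a[r] = a[r], a[k]
                    sign = -sign
                    break
            else:
                return 0                  # whole column is zero: singular matrix
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                # Bareiss recurrence; the integer division here is always exact
                a[i][j] = (a[i][j] * a[k][k] - a[i][k] * a[k][j]) // prev
            a[i][k] = 0
        prev = a[k][k]
    return sign * a[-1][-1]

print(int_det([[1, 2], [3, 4]]))   # -2, an exact Python int
```

Unlike numpy.linalg.det, this never leaves integer arithmetic, so there is no float64 rounding to undo.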
When are the units of R[x] exactly the units of R?

I (Anton) have edited this question to be the question Pete and Zeb discuss in the first few comments. What conditions on a ring $R$ imply that the units of $R[x]$ are exactly the units of $R$?

ra.rings-and-algebras polynomials
He also gives counterexamples to both implications if $R$ is not assumed commutative and mentions a really interesting related question. If $I\subseteq R$ is an ideal all of whose elements are nilpotent and $a_i\in I$, then does it follow that $1+a_1x+\cdots +a_nx^n$ is a unit in $R[x]$? If you can prove that it does, it would imply the Köthe conjecture, a famous problem in ring theory. add comment If $R$ is a commutative ring, then by the following result, the answer is "if and only if $R$ is reduced." If $R$ is a commutative ring, then $a_0+a_1x+\cdots + a_nx^n\in R[x]$ is a unit if and only if $a_0$ a unit in $R$ and $a_i$ is nilpotent for $i>0$. Proof. One direction is easy. Any polynomial of the given form is a unit because the sum of a unit and a nilpotent element is always a unit. The other direction isn't too hard if $R$ is a domain (the product of non-zero elements is always non-zero). If $g=b_0+\cdots b_mx^m$ (with $b_m\neq 0$) is the inverse of $f=a_0+\cdots+a_nx^n$ (with $a_n\neq 0$), then the highest order term of $1=f\cdot g$ is $a_nb_mx^{n+m}$, so we must have $n=m=0$ and $a_0$ invertible (with inverse $b_0$) For the general case, suppose $a_0+\cdots +a_nx^n$ is a unit. Reducing modulo $x$, we must get a unit in $R[x]/(x)\cong R$, so $a_0$ must be a unit. Reducing modulo any prime $\mathfrak p\subseteq R$, we get a unit in $(R/\mathfrak p)[x]$. Since $R/\mathfrak p$ is a domain, the previous paragraph shows that $a_i\in \mathfrak p$ for all $i>0$ and all primes $\mathfrak p$. Since the intersection of all primes is the nilradical, each $a_i$ must be nilpotent. A more "bare hands" elementary proof is given in Ex. 1.32 of Lam's Exercises in Classical Ring Theory. He also gives counterexamples to both implications if $R$ is not assumed commutative and mentions a really interesting related question. If $I\subseteq R$ is an ideal all of whose elements are nilpotent and $a_i\in I$, then does it follow that $1+a_1x+\cdots +a_nx^n$ is a unit in $R[x]$? 
If you can prove that it does, it would imply the Köthe conjecture, a famous problem in ring theory.
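The characterization above is easy to check by hand in a small non-reduced ring. The sketch below (which assumes nothing beyond the theorem statement itself; the helper `polymul_mod` is mine, not from the answer) verifies that $1+2x$ is a unit in $(\mathbb{Z}/4\mathbb{Z})[x]$, where $2$ is nilpotent since $2^2=0$:

```python
# Verify the unit criterion in R[x] for R = Z/4Z, a non-reduced ring.
# Polynomials are coefficient lists, lowest degree first.

def polymul_mod(p, q, n):
    """Multiply two polynomials over Z/nZ and trim trailing zeros."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] = (out[i + j] + a * b) % n
    while len(out) > 1 and out[-1] == 0:
        out.pop()
    return out

# In (Z/4Z)[x]: (1 + 2x)^2 = 1 + 4x + 4x^2 = 1, so 1 + 2x is its own inverse,
# exactly as predicted: a_0 = 1 is a unit and a_1 = 2 is nilpotent.
print(polymul_mod([1, 2], [1, 2], 4))  # [1]
```

Over a reduced ring such as $\mathbb{Z}$ the same product $(1+2x)^2 = 1+4x+4x^2$ is not $1$, consistent with the "only if" direction.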
Distributed selfish load balancing
Results 1 - 10 of 27

In PODC, 2006. Cited by 47 (6 self).
"There has been substantial work developing simple, efficient no-regret algorithms for a wide class of repeated decision-making problems including online routing. These are adaptive strategies an individual can use that give strong guarantees on performance even in adversarially-changing environments. There has also been substantial work on analyzing properties of Nash equilibria in routing games. In this paper, we consider the question: if each player in a routing game uses a no-regret strategy, will behavior converge to a Nash equilibrium? In general games the answer to this question is known to be no in a strong sense, but routing games have substantially more structure. In this paper we show that in the Wardrop setting of multicommodity flow and infinitesimal agents, behavior will approach Nash equilibrium (formally, on most days, the cost of the flow will be close to the cost of the cheapest paths possible given that flow) at a rate that depends polynomially on the players' regret bounds and the maximum slope of any latency function. We also show that price-of-anarchy results may be applied to these approximate equilibria, and also consider the finite-size (non-infinitesimal) load-balancing model of Azar [2]."

In Proc. 38th Ann. ACM Symp. on Theory of Comput. (STOC), 2006. Cited by 43 (8 self).
"We study rerouting policies in a dynamic round-based variant of a well-known game-theoretic traffic model due to Wardrop. Previous analyses (mostly in the context of selfish routing) based on Wardrop's model focus mostly on the static analysis of equilibria. In this paper, we ask the question whether the population of agents responsible for routing the traffic can jointly compute, or better learn, a Wardrop equilibrium efficiently. The rerouting policies that we study are of the following kind. In each round, each agent samples an alternative routing path and compares the latency on this path with its current latency. If the agent observes that it can improve its latency, then it switches, with some probability depending on the possible improvement, to the better path. We can show various positive results based on a rerouting policy using an adaptive sampling rule that implicitly amplifies paths that carry a large amount of traffic in the Wardrop equilibrium. For general asymmetric games, we show that a simple replication protocol in which agents adopt strategies of more successful agents reaches a certain kind of bicriteria equilibrium within a time bound that is independent of the size and the structure of the network but only depends on a parameter of the latency functions that we call the relative slope. For symmetric games, this result has an intuitive interpretation: replication approximately satisfies almost everyone very quickly. In order to achieve convergence to a Wardrop equilibrium, besides replication one also needs an exploration component discovering possibly unused strategies. We present a …"

In Proc. 2nd Conference on Future Networking Technologies (CoNext), 2006. Cited by 20 (4 self).
"One major challenge in communication networks is the problem of dynamically distributing load in the presence of bursty and hard-to-predict changes in traffic demands. Current traffic engineering operates on time scales of several hours, which is too slow to react to phenomena like flash crowds or BGP reroutes. One possible solution is to use load-sensitive routing. Yet, interacting routing decisions at short time scales can lead to oscillations, which has prevented load-sensitive routing from being deployed since the early experiences in Arpanet. However, recent theoretical results have devised a game-theoretical re-routing policy that provably avoids such oscillation and in addition can be shown to converge quickly. In this paper we present REPLEX, a distributed dynamic traffic engineering algorithm based on this policy. Exploiting the fact that most underlying routing protocols support multiple equal-cost routes to a destination, it dynamically changes the proportion of traffic that is routed along each path. These proportions are carefully adapted utilising information from periodic measurements and, optionally, information exchanged between the routers about the traffic condition along the path. We evaluate the algorithm via simulations employing traffic loads that mimic actual Web traffic, i.e., bursty TCP traffic, and whose characteristics are consistent with self-similarity. The simulations quickly converge and do not exhibit significant oscillations on both artificial as well as real topologies, as can be expected from the theoretical results."

Cited by 7 (4 self).
"We present efficient algorithms for computing approximate Wardrop equilibria in a distributed and concurrent fashion. Our algorithms are executed by a finite number of agents, each of which controls the flow of one commodity, striving to balance the induced latency over all utilised paths. The set of allowed paths is represented by a DAG. Our algorithms are based on previous work on policies for infinite populations of agents. These policies achieve a convergence time which is independent of the underlying network and depends mildly on the latency functions. These policies can neither be applied to a finite set of agents nor can they be simulated directly, due to the exponential number of paths. Our algorithms circumvent these problems by computing a randomised path decomposition in every communication round. Based on this decomposition, flow is shifted from overloaded to underloaded paths. This way, our algorithm can handle exponentially large path collections in polynomial time. Our algorithms are stateless, and the number of communication rounds depends polynomially on the approximation quality and is independent of the topology and size of the network."

Cited by 5 (0 self).
"We study here the effect of concurrent greedy moves of players in atomic congestion games where n selfish agents (players) wish to select a resource each (out of m resources) so that her selfish delay there is not much. Such games usually admit a global potential that decreases by sequential and selfishly improving moves. However, concurrent moves may not always lead to global convergence. On the other hand, concurrent play is desirable because it might essentially improve the system convergence time to some balanced state. The problem of 'maintaining' global progress while allowing concurrent play is exactly what is examined and answered here. We examine two orthogonal settings: (i) a game where the players decide their moves without global information, each acting 'freely' by sampling resources randomly and locally deciding to migrate (if the new resource is better) via a random experiment. Here, the resources can have quite arbitrary latency that is load-dependent. (ii) An 'organised' setting where the players are prepartitioned into selfish groups (coalitions) and where each coalition does an improving coalitional move. Here the concurrency is among the members of the coalition. In this second setting, the resources have latency functions that are only linearly dependent on the load, since this is the only case so far where a global potential exists. In both cases (i), (ii) we show that the system converges to an 'approximate' equilibrium very fast (in …"

Computing Research Repository, 1992. Cited by 5 (0 self).
"We explore the fundamental limits of distributed balls-into-bins algorithms, i.e., algorithms where balls act in parallel, as separate agents. This problem was introduced by Adler et al., who showed that non-adaptive and symmetric algorithms cannot reliably perform better than a maximum bin load of Θ(log log n / log log log n) within the same number of rounds. We present an adaptive symmetric algorithm that achieves a bin load of two in log* n + O(1) communication rounds using O(n) messages in total. Moreover, larger bin loads can be traded in for smaller time complexities. We prove a matching lower bound of (1−o(1)) log* n on the time complexity of symmetric algorithms that guarantee small bin loads at an asymptotically optimal message complexity of O(n). The essential preconditions of the proof are (i) a limit of O(n) on the total number of messages sent by the algorithm and (ii) anonymity of bins, i.e., the port numberings of balls are not globally consistent. In order to show that our technique yields indeed tight bounds, we provide for each assumption an algorithm violating it, in turn achieving a constant maximum bin load in constant time. As an application, we consider the following problem. Given a fully connected graph of n nodes, where each node needs to send and receive up to n messages, and in each round each node may send one message over each link, deliver all messages as quickly as possible to their destinations. We give a simple and robust algorithm of time complexity O(log* n) for this task and provide a generalization to the case where all nodes initially hold arbitrary sets of messages. Completing the picture, we give a less practical, but asymptotically optimal, algorithm terminating within O(1) rounds. All these bounds hold with high …"

In Proc. Symp. Dynamic Spectrum Access Networks (DySPAN), 2008. Cited by 4 (3 self).
"In this paper we study an idealized model of load balancing for dynamic spectrum allocation (DSA) for secondary users using only local information. In our model, each agent is assigned to a channel and may reassign its load in a round-based fashion. We present a randomized protocol in which the actions of the agents depend purely on some cost measure (e.g., latency, inverse of the throughput, etc.) of the currently chosen channel. Since agents act concurrently, the system is prone to oscillations. We show how this can be avoided, guaranteeing convergence towards a state in which every agent sustains at most a certain threshold cost (if such a state exists). We show that the system converges quickly by giving bounds on the convergence time towards approximately balanced states. Our analysis in the fluid limit (where the number of agents approaches infinity) holds for a large class of cost functions. We support our theoretical analysis by simulations to determine the dependence on the number of agents. It turns out that the number of agents affects the convergence time only in a logarithmic fashion. The work shows under quite general assumptions that even an extremely large number of users using several hundreds of (virtual) channels can work in a DSA fashion."

In proceedings of the 3rd IEEE International Conference on e-Science and Grid Computing, 2007. Cited by 3 (2 self).
"Service-Oriented Architectures provide integration of interoperability for independent and loosely coupled services. Web services and the associated new standards such as WSRF are frequently used to realise such Service-Oriented Architectures. In such systems, autonomic principles of self-configuration, self-optimisation, self-healing and self-adapting are desirable to ease management and improve robustness. In this paper we focus on the extension of the self-management and autonomic behaviour of a WSRF container connected by a structured P2P overlay network to monitor and rectify its QoS to satisfy its SLAs. The SLA plays an important role during two distinct phases in the life-cycle of a WSRF container: firstly during service deployment, when services are assigned to containers in such a way as to minimise the threat of SLA violations, and secondly during maintenance, when violations are detected and services are migrated to other containers to preserve QoS. In addition, as the architecture has been designed and built using standardised modern technologies and with high levels of transparency, conventional web services can be deployed with the addition of a SLA specification."

2008. Cited by 3 (3 self).
"Imitating successful behavior is a natural and frequently applied approach when facing scenarios for which we have little or no experience upon which we can base our decision. In this paper, we consider such behavior in atomic congestion games. We propose to study concurrent imitation dynamics that emerge when each player samples another player and possibly imitates that agent's strategy if the anticipated latency gain is sufficiently large. Our main focus is on convergence properties. Using a potential function argument, we show that our dynamics converge in a monotonic fashion to stable states. In such a state none of the players can improve its latency by imitating somebody else. As our main result, we show rapid convergence to approximate equilibria. At an approximate equilibrium only a small fraction of agents sustains a latency significantly above or below average. In particular, imitation dynamics behave like fully polynomial time approximation schemes (FPTAS). Fixing all other parameters, the convergence time depends only in a logarithmic fashion on the number of agents. Since imitation processes are not innovative, they cannot discover unused strategies. Furthermore, strategies may become extinct with non-zero probability. For the case of singleton games, we show that the probability of this event occurring is negligible. Additionally, we prove that the social cost of a stable state reached by our dynamics is not much worse than an optimal state in singleton congestion games with linear latency functions. Finally, we discuss how the protocol can be extended such that, in the long run, dynamics converge to a Nash equilibrium."

In proceedings of the 15th International Conference on Advanced Computing and Communication (ADCOM), 2007. Cited by 2 (2 self).
"This paper presents an autonomic Web Service Resource Framework (WSRF) container that enables self-configuration using IBM's autonomic computing (AC) architecture and resolves Quality of Service (QoS) problems through service migration. The migration manager bases its decisions on an overall health status metric (H-metric). The H-metric characterises the health of a service container. A unified AC sensor/effector interface, protocol, and metric summarization allows us to build up a hierarchical WSRF container structure to create a virtualized WSRF container."
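Several of the abstracts above describe the same basic concurrent policy: each agent samples a random alternative resource and migrates with probability proportional to the anticipated relative latency improvement, which damps oscillations. The sketch below is an illustrative toy simulation of that idea (the linear latency function, resource counts, and damping rule are my own assumptions, not any paper's reference implementation):

```python
# Toy simulation of concurrent sample-and-migrate load balancing:
# in each round every agent samples a random alternative resource and
# switches with probability (cur - alt) / cur, using the stale loads
# observed at the start of the round (all moves applied at once).
import random

def step(loads, latency):
    """One concurrent round over resources; loads[i] = agents on resource i."""
    n = len(loads)
    moves = []
    for src in range(n):
        cur = latency(loads[src])
        for _ in range(loads[src]):          # each agent acts independently
            dst = random.randrange(n)
            alt = latency(loads[dst])
            if alt < cur and random.random() < (cur - alt) / cur:
                moves.append((src, dst))
    for src, dst in moves:                   # apply all migrations at once
        loads[src] -= 1
        loads[dst] += 1
    return loads

random.seed(0)
loads = [90, 5, 5]                           # heavily imbalanced start
for _ in range(50):
    loads = step(loads, latency=lambda l: 1 + l)   # linear latency
print(loads)   # roughly balanced: each entry near 100/3
```

Because the migration probability shrinks as loads equalize, the system settles near balance instead of oscillating, which is the qualitative behavior the convergence results above formalize.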
is this a truncated normal? [Archive] - Statistics Help @ Talk Stats Forum

leo nidas
03-01-2010, 02:12 PM

Hi there,

After some algebra on a problem I have, I derived the following distribution concerning, say, the r.v. X:

(1)    (1/(s/g)) · φ( (x − a/g) / (s/g) ) · ( 1 / Φ(a/s) ),

where Φ is the standard normal CDF and (1/k)·φ((x−m)/k) = (1/(k·sqrt(2π)))·exp(−(x−m)²/(2k²)) is the N(m, k²) density.

My question is: is the density in (1) some known distribution? I thought that it was a truncated normal distribution, but I am having second thoughts. I checked the wiki and the pdf of the truncated normal does not seem to match; does it? What would the mean and variance be?

Thanks in advance for any answers!

I also have this form, if it makes things easier:

(1/(s/b)) · φ( (x − a/b²) / (s/b) ) · ( 1 / Φ(a/(b·s)) )
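One way to probe the question numerically (a sketch; the support x > 0 and the parameter values are my assumptions, since the post does not state the support): writing m = a/g and k = s/g, the factor Φ(a/s) in (1) equals Φ(m/k), which is exactly the normalizer of a N(m, k²) density truncated to the positive axis. So if (1) integrates to 1 over (0, ∞), it is consistent with that truncated-normal reading.

```python
# Numerical probe of density (1) from the post, with m = a/g, k = s/g.
# For N(m, k^2) truncated to (0, inf) the normalizer is Phi(m/k) = Phi(a/s),
# so (1) should integrate to 1 over the positive axis under that reading.
import math

def phi(z):   # standard normal pdf
    return math.exp(-z * z / 2.0) / math.sqrt(2.0 * math.pi)

def Phi(z):   # standard normal cdf
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def f(x, a, g, s):   # density (1): (g/s) * phi((x - a/g)/(s/g)) / Phi(a/s)
    return (g / s) * phi((x - a / g) / (s / g)) / Phi(a / s)

# crude rectangle-rule integration over (0, 20) for sample parameters
a, g, s = 1.0, 2.0, 0.5
h = 0.001
total = h * sum(f(i * h, a, g, s) for i in range(1, 20000))
print(total)  # ≈ 1.0 if (1) is a proper density on (0, inf)
```

Without the positive-support restriction, (1) integrates to 1/Φ(a/s) over the whole real line, so it would not be a proper density; the restriction is what makes the truncated-normal interpretation work.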