TEXT

The TEXT function adds a text annotation to the current IDL Graphic.

Example

The following lines create the plot shown above.

x = 0.01*(FINDGEN(201))
p1 = PLOT(x, EXPINT(1, x), '2', YRANGE=[0,2])
p2 = PLOT(x, EXPINT(2, x), 'r2', /OVERPLOT)
p3 = PLOT(x, EXPINT(3, x), 'g2', /OVERPLOT)
t1 = TEXT(0.3, 1.6, $
   '$E_n(z) = \int_{1}^{\infty} ' + $
   'e^{-zt} t^{-n} dt, \Re(z)\ge 0$', $
   /DATA, FONT_SIZE=14, FONT_STYLE='Italic')
t2 = TEXT([0.4, 0.22, 0.1], [0.8, 0.58, 0.2], $
   '$\it n = '+['1','2','3']+'$', /DATA)

As a second example, the \overline{ } symbol draws a line above the specified characters. Create a plot with the following string:

p = PLOT(/TEST, TITLE="$1/7 = 0.\overline{142857}$")

Additional Examples

See Annotations examples for additional examples using the TEXT function.

Syntax

text = TEXT( X, Y, [Z,] String [, Format] [, Keywords=value] [, Properties=value])

Keywords

Keywords are applied only during the initial creation of the graphic.

[, /DATA] [, /DEVICE] [, /NORMAL] [, /RELATIVE] [, TARGET=variable]

Properties

Properties can be set as keywords to the function during creation, or retrieved or changed using the "." notation after creation.

ALIGNMENT, BASELINE, CLIP, COLOR, FILL_BACKGROUND, FILL_COLOR, FONT_COLOR, FONT_NAME, FONT_SIZE, FONT_STYLE, HIDE, NAME, ONGLASS, ORIENTATION, POSITION, STRING, TRANSPARENCY, UPDIR, UVALUE, VERTICAL_ALIGNMENT, WINDOW

Methods

Close ConvertCoord CopyWindow Delete Erase Order Print Refresh Rotate Save Scale Select Translate

Note: The Rotate method only applies to text objects that are within the annotation layer, not the dataspace. For text objects in the dataspace, use the ORIENTATION property to rotate the text. Also, the Scale method is not available for text objects. To scale the text, use the FONT_SIZE property.

Return Value

The TEXT function returns a reference to the created text annotation. Use the returned reference to manipulate the annotation after creation by changing properties or calling methods.

Arguments

X, Y, Z

The location where the text will be placed, in data, normal, or device coordinates. If X, Y, and Z are arrays, then an array of TEXT objects will be returned.

String

The text to be displayed. If the string is an array, each item displays on a separate line. You can add mathematical symbols and Greek letters using a TeX-like syntax. For details see below.

Format

A string that sets the text color using short tokens. For example, to create red text, you would use the following:

t = TEXT(0.5, 0.5, 'Hello', 'r')

Keywords

Keywords are applied only during the initial creation of the graphic.

DATA

Set this keyword if the input arguments are specified in data coordinates. Setting this keyword inserts the text into the current data space, otherwise the text is added to the annotation layer.

DEVICE

Set this keyword if the input arguments are specified in device coordinates (pixels).

NORMAL

Set this keyword if the input arguments are specified in normalized ( [0, 1] ) coordinates (the default).

RELATIVE

Set this keyword to indicate that the input arguments are specified in normalized [0,1] coordinates, relative to the axis range of the TARGET's dataspace. If the TARGET keyword is not specified, then setting /RELATIVE is the same as setting /NORMAL.

Note: When using /RELATIVE, even though the coordinates are relative to the TARGET's dataspace, the graphic is added to the annotation layer, not to the dataspace.

TARGET

Set this keyword to the graphic object to use if points are specified in data coordinates.
Properties ALIGNMENT A floating-point value between 0.0 and 1.0 that indicates the horizontal alignment of the text baseline. An alignment of 0.0 (the default) left-justifies the text at the given position; an alignment of 1.0 right-justifies the text, and an alignment of 0.5 centers the text. BASELINE A two- or three-element floating-point vector that sets the direction in which the baseline is to be oriented. Use this property in conjunction with UPDIR to specify the plane on which the text lies. The following table gives the commonly-used BASELINE values: [1.0, 0, 0] Parallel to +X axis (default) [0, 1.0, 0] Parallel to +Y axis [0, 0, 1.0] Parallel to +Z axis CLIP Set this property to 1 to clip portions of the text that lie outside of the dataspace range, or to 0 to disable clipping. The default is 1. This property is ignored unless the DATA property is set. COLOR Set this property to a string or RGB vector that specifies the text color. The default is "black". FILL_BACKGROUND A value of 1 fills the area inside the text box. FILL_COLOR Set this property to a string or RGB vector that specifies the fill color inside the text box. FONT_COLOR Set this property to a string or RGB vector that specifies the text color. The default value is "black". FONT_NAME A string specifying the name of the IDL or system font to use for the text string. The default value is "DejaVuSans". The following font names are available on all systems: "Helvetica" "Times" "Courier" "Symbol" "Monospace Symbol" "DejaVuSans" "DejaVuSymbol" "Hershey n", where n is the Hershey font number On Windows platforms, you may also use any other fonts installed on your system. See Using TrueType Fonts for a list of available characters in the above fonts. FONT_SIZE Set this property to an integer or floating-point value giving the font size in points. The default is 9. FONT_STYLE An integer or string specifying the font style to be used for the text string. Allowed values are: Integer String Resulting Style 0 "Normal" or "rm" Default (roman) 1 "Bold" or "bf" Bold 2 "Italic" or "it" Italic 3 "Bold italic" or "bi" Bold italic HIDE Set this property to 1 to hide the graphic. Set HIDE to 0 to show the graphic. NAME A string that specifies the name of the graphic. The name can be used to retrieve the graphic using the brackets array notation. If NAME is not set then a default name is chosen based on the graphic type. ONGLASS Set this property to 1 to display the text on a plane facing the viewer. Set this property to 0 to display the text using the full 3-D orientation of the text object and its parents. The default is ONGLASS=1 for text in a 2-D data space with a non-zero ORIENTATION. The default is ONGLASS=0 for text with ORIENTATION=0, or for text in a 3-D data space. Note: If ONGLASS is set to 1, then IDL will automatically set the CLIP property to 0, to avoid clipping the text. ORIENTATION Set this property to the angle of the text annotation. The angle rotates counterclockwise from the positive x axis. If not supplied, the default value 0 is used. POSITION Set this property to a two-element vector that determines the position of the graphic within the window. Coordinates are expressed in normalized units ranging from 0.0 to 1.0. On creation, if the DEVICE keyword is set, the units are given in device units (pixels). STRING After the text annotation is created, set this property to a string or an array of strings to change the displayed text. If STRING is an array then each string will be placed on a separate line. 
TRANSPARENCY

An integer between 0 and 100 that specifies the percent transparency of the text. The default value is 0.

UPDIR

A two- or three-element floating-point vector describing the vertical direction for the string. The upward direction is the direction defined by a vector pointing from the origin to the point specified. Use this property in conjunction with BASELINE to specify the plane on which the text lies (the direction specified by UPDIR should be orthogonal to the direction specified by BASELINE). For example, to have a string lie in the Y-Z plane pointing downwards and towards you, define BASELINE to be [0, 0, 1.0] (the +Z direction), and UPDIR to be [0, -1.0, 0] (the -Y direction).

UVALUE

Set this property to an IDL variable of any data type.

VERTICAL_ALIGNMENT

A floating-point value between 0.0 and 1.0 that indicates the vertical alignment of the text baseline. An alignment of 0.0 (the default) bottom-justifies the text at the given position; an alignment of 1.0 top-justifies the text.

WINDOW (Get Only)

This property retrieves a reference to the WINDOW object which contains the graphic.

Adding Mathematical Symbols and Greek Letters to the Text String

You can add mathematical symbols and Greek letters to the text string using a TeX-like syntax. To turn on math symbols for a text string (or a portion of a string), you need to surround that region with a pair of "$" characters. The following tables list the letters and symbols you can embed within a text string: Greek Letters, Binary Operators, Relations and Arrows, Math Miscellaneous, Other Symbols, Accents.

Superscripting and Subscripting

Within the $ ... $ sections, use an underscore "_" character to create a subscript, and a caret "^" to create a superscript. If the underscore or caret is followed by a single character, only that character will be subscripted or superscripted. Optionally, you can use curly braces { } to subscript or superscript an entire group of characters. You can also nest superscripts and subscripts (two levels only), for example "$x^{y^2}$".

Changing Fonts and Special Characters

Use the following commands to change fonts or add spacing within the $ ... $ sections of the string:

\bf Helvetica bold
\it Helvetica italic
\bi Helvetica bold italic
\rm Default font
\t Add a tab (4 spaces)
\n Add a carriage return

Miscellaneous Unicode Symbols

You can also use "\U(u_0, u_1, ..., u_n)" to insert arbitrary Unicode characters from the DejaVuSans font. Each u_i within the parentheses will be interpreted as a 16-bit hexadecimal Unicode value. For example:

w = WINDOW()
t = TEXT(0.01, 0.05, "$\U(266A,266C) \Pluto's really small \U(266B)$")

The SHOWFONT procedure may be used to display the Unicode characters from the DejaVuSans font.

Note: To include Unicode characters from a font other than DejaVuSans, use the FONT_NAME property to change the font, and then use the "!Z" embedded formatting command instead of "\U".

Note: To include Unicode characters in an EPS or PDF file, use the BITMAP keyword to the Save method. Otherwise the Unicode characters may display incorrectly.

Tips

Math symbols, superscripts, subscripts, and the font commands are only active within the $ ... $ sections of your string. To include a normal "$" character within your string, you should escape it by inserting a backslash in front, like this: "\$". If a text formatting command is followed by a single space character, that space is ignored. For example, in the string "$A\times B$", the resulting string is "A×B", with no space.
Any additional spaces following the symbol will be included in the output. A fixed-width space may be inserted using "\ " (backslash+space). The "\sqrt" square root symbol is wide enough to accommodate one character underneath the root. You can use curly braces { } to include a group of characters. For example, you could do "$\sqrt{b^2 - 4ac}$". The "\overline" symbol is wide enough to accommodate one character underneath the line. You can use curly braces { } to include a group of characters. For example, you could do "$1/7 = 0.\overline{142857}$". When combining subscripts and superscripts, to ensure that the spacing is correct, it is best to put the longest group of characters last. For example, instead of: "$A_{ijk}^{2}B$" You should compose the string this way: "$A^{2}_{ijk}B$" Within the $ ... $ sections, the characters \, {, }, ^, and _ are all special control characters. To insert one of these characters into your string, use the backslash as an escape character. For example: "\\", "\{", "\}", "\^", "\_" Version History 8.0 Introduced 8.1 Added the UVALUE property and the Delete method. 8.2 Added \dagger, \ddagger, \emptyset, \permil, \primeprime, and the Unicode symbols, including the planets. Added Unicode \U support. Added \sqrt{ } support. 8.2.2 Added POSITION property. 8.2.3 Added \overline{ }. 8.3 Added ONGLASS property; added \| and \perp symbols. 8.4 Added \t and \n commands. 8.5 Added \star symbol. 8.6 Changed default font to DejaVuSans. Added new symbols. 8.6.1 Changed default math font to DejaVuSymbol. See Also !COLOR, PLOT, Using IDL graphics, Using TrueType Fonts
Structure of a Stock Trading Strategy

As I understand it, Quantopian's objectives can all be expressed in broad lines of thought. Their prime objective is to maximize their multi-strategy portfolio(s). I view it as a short-term operation, like trying to predict some alpha over the short term (a few weeks to a few months) where long-term visibility is greatly reduced. As if adopting a "we will see how it turns out" attitude with a high probability of long-term uncertainty. I would prefer to view their optimization problem first as a long-term endeavor where their portfolio of strategies will have to contend with the Law of diminishing returns (alpha decay), whether they like it or not. And this holds whether or not their strategies compensate for that alpha decay. It is a matter of finding whatever trading techniques are needed or could be found to sustain the exponential growth of their "anticipated" growing portfolio.

A trading portfolio can be expressed by the outcome of its payoff matrix: $\;$ Total Profits $\,$ = $\displaystyle{\int_{t=0}^{t=T}H(t) \cdot dP}$

The integral of this payoff matrix gives the total profit generated over the trading interval (up to terminal time T), whatever its trading methods and whatever its size. The formula will give the proper answer no matter the number of stocks considered and over whatever trading interval, no matter how long it might be (read over years and years, even if the trading itself might be done daily, weekly, minutely, or whatever). The strategy $H_{mine}$ becomes the major concern since $\Delta P$ is not something you can control; it is just part of the historical record. However, $H_{mine}$ will fix the prices at which all the trades are recorded. All those trading prices become part of the recorded price matrix $P$.

We can identify any strategy as $H_{k}$ for $k \in \{1, \dots, K\}$, with $K$ the number of strategies. And if you want to treat multiple strategies at the same time, you can use the first equation as a 3-dimensional array where $H_{k}$ is the first axis. Knowing the state of this 3-dimensional payoff matrix is easy: all entries are time-stamped and identified by $h_{k,d,j}$, thereby giving the quantity held in each traded stock $j$ within each strategy $k$ at any time $t$. How much a strategy $H_{k}$ contributed to the overall portfolio is also easy to answer: $\quad \quad \displaystyle{w_k = \frac{\int_{0}^{T} H_{k} \cdot dP}{ \sum_{i=1}^{K} \int_{t=0}^{t=T}H_i \cdot dP}}$

Portfolio holdings are time functions which can be evaluated at any time. The same goes for strategy weights $w_{k}$. Nothing in there says that $w_{k}$ will be positive. For instance, according to Quantopian's contest procedures, a negatively performing strategy ($\int_{0}^{T} H_{k} \cdot dP < 0 $) is simply eliminated. Understandably, each strategy $H_{k}$ can be unique or a variation on whatever theme. You can force your trading strategy to be whatever you want, within the limits of the possible, evidently. But, nonetheless, whatever you want your trading strategy to do, you can make it do it. And that is where your strategy design skills need to shine. Quantopian can re-order the strategy weights $w_{k}$ by re-weighting them on whatever criteria they like, just as in the contest with their scoring mechanism, and declare these new weights as some alpha generation "factor" with $\sum_{k=1}^{K} b_k \cdot w_{k}$. And this will hold within their positive-strategies contest rule: $ \forall \, w_k > 0$.
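To make the payoff-matrix notation concrete, here is a minimal numpy sketch of the discrete version of the two formulas above. The array shapes, names, and random data are illustrative assumptions on my part, not Quantopian's data model.

```python
import numpy as np

# Illustrative sizes: K strategies, D trading days, J stocks.
K, D, J = 3, 250, 50
rng = np.random.default_rng(0)

# h[k, d, j]: shares of stock j held by strategy k on day d (the payoff-matrix entries).
h = rng.integers(-100, 100, size=(K, D, J)).astype(float)

# dP[d, j]: day-over-day price change of stock j.
prices = 50.0 + np.cumsum(rng.normal(0.0, 0.5, size=(D + 1, J)), axis=0)
dP = np.diff(prices, axis=0)

# Discrete version of  Total Profits = integral_0^T H_k . dP  for each strategy.
profit_k = np.einsum('kdj,dj->k', h, dP)

# Strategy weights w_k: each strategy's profit over the sum across all strategies.
w = profit_k / profit_k.sum()
print(profit_k, w)
```

As in the text, nothing forces a $w_k$ to be positive: a strategy whose integral is negative simply shows up with a negative weight.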
Again, under the restriction of $\, w_k > 0$, they could add leveraging scalers based on other criteria and still have an operational multi-strategy portfolio: $\sum_{k=1}^{K} l_k \cdot b_k \cdot w_{k}$. The leveraging might have more impact if ordered by their expected weighting and leveraging mechanism: $\; \mathsf{E} \left [ l_k \cdot b_k \cdot w_{k} \right ] \succ l_{k-1} \cdot b_{k-1} \cdot w_{k-1} $. But this might require that their own weighting factors $\, b_k $ offer some predictability. And I have no data on their weighting mechanism.

Naturally, any strategy $H_{k}$ can use as many internal factors as it wants or needs. It does not change the overall objective, which is having $\, w_k > 0$, not only to be considered in the contest but to rank high enough to be considered for an allocation. Quantopian can add any criteria it wants to its list, including operational restrictions like market-neutrality or whatever. These become added conditions that strategy $H_{k}$ needs to comply with; otherwise, again, it might not be considered for an allocation. The allocation is the real prize. The contest reward tokens should be viewed as just that: a small "waiting reward" for the best 10 strategies in the rankings: $ H_{k=1, \dots, 10}\,$ out of the $ H_{k=1 \, , \dots, \, \approx 300}$ participating.

Your Trading Strategy Or Mine

From what preceded, all the attention should be put on strategy $H_{mine}$ or $H_{yours}$, depending. I will use its generic form $H_k$ for whatever it might be among the gazillions of choices. The task is to design a trading strategy that will exceed average long-term market expectations, and then some, as an added reward for all the work done and the skills brought to the game. What should be the nature of this trading strategy, since it can be anything we want? The ultimate goal is still to have strategy $H_k$ outperform its benchmark by a wide margin: $\int_0^T H_k \cdot dP \gg \int_0^T H_{spy} \cdot dP $. It will not be instantaneous, evidently. Building a portfolio is really a long-term endeavor. One thing you do not want to see is "crash and burn".

Delegating Trading To A Black Box

A major constraining strategy design element might be the requirement of using an optimizer to do the trading. Whether it is Quantopian's Optimizer API (used in Quantopian's contest) or the CVXOPT optimizer (the one I used), both delegate the trading activity to a "black box" where you have no control over what is going on inside. It is the optimizer that will determine how many shares will be traded in which stock, at what price, and at what time. The more constraints we put on this black box, the more it will fail to produce higher returns. As if saying that because we are using an optimizer, our trading strategies might produce lower returns over the long haul, as if by default, or more appropriately, viewing it as: it is all it could do. A trading strategy has a structure, even if it is fuzzy and chaotic. However, its general behavior can be averaged out when you have a high enough number of trades. You can say things like: on average...

What Can an Optimizer See?

Either of the cited optimizers will only detect what it can see. Should there be no trends (short or mid-term), the optimizer will answer with a flat zero. If the data is totally random, it will answer with zero. And that is not a way to generate profits, no matter what anyone might say. Also, the optimizer will not see beyond the data's rolling lookback period.
That mathematical contraption cannot extract blood from a stone or a flat line, no matter how much you try; neither can I. It is only if you can supply them with trending data that these optimizers can see and do something. Nonetheless, the two optimizers, especially when target weights are used, can be shoved or pushed around. It can be done by feeding them a special diet of your own concoction; with these altered weights you can force them to behave differently. In a way, forcing in your own objectives, your own agenda, even in an uncertain, stochastic, and chaotic trading environment.

Forcing The Optimizer

For instance, one of my high-flying strategies highlighted in my latest book (https://www.amazon.com/dp/B07R4Q7SZF) used the CVXOPT optimizer for its trading. The strategy demonstrated returns way above market averages, not only over a couple of years but over a 14-year period. The outcome was not a random occurrence or some luck factor. The strategy was simply feeding the optimizer with "prepackaged" weights. I used the word simply because it was exactly that. An overview of this can be seen in the following forum: https://www.quantopian.com/posts/reengineering-for-more.

Reengineering Your Stock Portfolio

There were no real factors per se in the above trading strategy, but price was at the center of it all. I wanted my equations to follow the general directive: $\,$ Total Profits$\,$ = $\int_0^T H_k \cdot (1+g(t))^{t} \cdot dP$. This implied that the bet size would grow exponentially. And, as a "side effect", it would compensate for the alpha decay due to the Law of diminishing returns. In fact, I made the strategy overcompensate, which gave it its exponential equity curve. Part of this is illustrated graphically in the following post: https://www.quantopian.com/posts/quality-factors-composite-feedback-requested-please#5d67dad066ea457e47eb5342. What was "required" was finding some "excuse" that would trigger more trades more often and increase the average net profit per trade as the portfolio grew in size. Both these tasks were relatively easy. It was all part of the inner workings of the above equation. The trading profits were continuously re-invested to generate even more profits, creating this self-funding positive feedback loop.

The NET Average Profit Thing

Your trading strategy $H_k$, whatever it is, will have an average net profit per trade at termination time. This is illustrated in the following chart (from one of my books) as a normal distribution (blue line) with its average return $\mu$. In reality, it is not a normal distribution (it will have fat tails, high kurtosis, and skewness), but for illustrative purposes it is close enough. The task is to move the whole distribution to the right as a block by changing its center of mass, and at the same time give it a higher density. This means having the trading strategy do more trades with a higher average net profit per trade. Gradually moving the center of mass to the right will compensate for return degradation. The farther right you move the distribution the better, evidently within all the portfolio constraints. As a side note, by my estimates, the Law of diminishing returns will slowly catch up and kick back in after some 20 to 25 years. That gives me ample time to figure out ways to give the strategy more upward momentum. It is not that you are predicting where the market is going (except maybe in a general sense); it is predicting what your trading behavior in response to market changes will be.
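To make the "move the distribution to the right and raise its density" idea concrete, here is a toy sketch. The trade counts, averages, and the Gaussian shape are illustrative assumptions only, not numbers from any book or actual strategy.

```python
import numpy as np

rng = np.random.default_rng(1)

def total_net_profit(n_trades, mean_profit, spread=10.0):
    """Draw per-trade net profits and sum them.

    A Gaussian is used purely for illustration; as the text notes, real
    per-trade profit distributions are fat-tailed and skewed.
    """
    return rng.normal(mean_profit, spread, size=n_trades).sum()

# Baseline: 2,000 trades averaging $5 net per trade (made-up numbers).
baseline = total_net_profit(2_000, 5.0)

# "Shift the center of mass right and raise the density":
# more trades at a higher average net profit per trade.
enhanced = total_net_profit(3_000, 8.0)

print(round(baseline), round(enhanced))   # each is roughly n_trades * mean_profit
```

The $(1+g(t))^t$ scaler discussed next is one way of supplying that extra push over time.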
And putting your strategy on steroids: $\,$ Total Profits$\,$ = $\;(1+g(t))^t \cdot \int_0^T H_k \cdot dP\;$ will do just that. Expressed in this fashion, it makes it explicit that you are the one providing the upward thrust to your existing trading strategy by making it trade more for a higher average profit. Slowly at first, but increasing the pressure over time. The scaling function $(1+g(t))^t$ can be internally or externally controlled. For instance, setting $g(t) = 0$ will eliminate its influence. If you want to outperform your peers, then you will definitely have to do more than they do. It is not by doing the same thing, or some variant thereof, that you will do better. Note that all this might require that you view this long-term trading problem with a different mindset.

Trading on an Excuse

In Reengineering For More (https://www.quantopian.com/posts/reengineering-for-more), the strategy used the AverageDollarVolume over the past 4 months as a factor. The rationale appears simple enough: if a stock trades a lot with a high AverageDollarVolume, then it is liquid and most probably part of the higher-capitalization stocks. However, the AverageDollarVolume has very little predictive power. First, it was at least 2 months out of date, meaning that trades would be taken based on what was a baseline average 2 months prior. Under these circumstances, the AverageDollarVolume should be considered almost a random number. It goes like this: how can the AverageDollarVolume of 2 months ago tell you what the price of a stock will be tomorrow, next week, or next month for that matter? The last expression above has moved the responsibility of enhancing your trading strategy directly into your hands. It is also giving you some control over your trading strategy $H_k$. I use pressure points to enhance performance, but I think there are many other techniques available to do an equivalent or better job. Should you go that route too? I am not the one to answer that question. Regardless, we all have to make choices.
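As a small illustration of the scaler's effect, here is a sketch applying $(1+g(t))^t$ to a stream of per-period profits. The monthly framing, the flat profit stream, and the 1% boost rate are assumptions chosen only to show the mechanics.

```python
import numpy as np

def scaled_total_profit(profits, g):
    """Apply the (1 + g(t))^t scaler to a stream of per-period profits.

    `profits[t]` stands in for the period-t slice of the integral H_k . dP,
    and `g` is a callable returning the boost rate at period t.
    Setting g(t) = 0 reproduces the unscaled total, as the text states.
    """
    t = np.arange(len(profits))
    scale = (1.0 + np.array([g(ti) for ti in t])) ** t
    return float(np.sum(scale * profits))

months = 20 * 12                          # 20 years of monthly periods (illustrative)
profits = np.full(months, 1_000.0)        # a flat $1,000 per month before scaling

print(scaled_total_profit(profits, lambda t: 0.0))    # no boost: 240,000
print(scaled_total_profit(profits, lambda t: 0.01))   # a 1% per-month boost, compounding
```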
Features

Introduction

CodiMD is a real-time, multi-platform collaborative markdown note editor. This means that you can write notes with other people on your desktop, tablet or even on the phone. You can sign in via multiple auth providers like GitHub and many more on the homepage.

Workspace

Modes

Desktop & Tablet
- Edit: See only the editor.
- View: See only the result.
- Both: See both in split view.

Mobile
- View: See only the result.
- Edit: See only the editor.

Night Mode: When you are tired of a white screen and would like a night mode, click on the little moon to turn on the night view of CodiMD. The editor view, which is in night mode by default, can also be toggled between night and day view using the little sun.

Image Upload: You can upload an image simply by clicking on the camera button. Alternatively, you can drag-n-drop an image into the editor. Even pasting images is possible! This will automatically upload the image to imgur, Amazon S3, Minio or the local filesystem, nothing to worry about. :tada:

Share Notes: If you want to share an editable note, just copy the URL. If you want to share a read-only note, simply press the publish button and copy the URL.

Save a Note: Currently, you can save to Dropbox or save an .md file locally.

Import Notes: Similarly to the save feature, you can also import an .md file from Dropbox, or import content from your clipboard, which can parse some HTML, which might be useful. :smiley:

Permissions: It is possible to change the access permission of a note through the little button on the top right of the view. There are six possible options:

| | Owner read/write | Signed-in read | Signed-in write | Guest read | Guest write |
|---|---|---|---|---|---|
| Freely | ✔ | ✔ | ✔ | ✔ | ✔ |
| Editable | ✔ | ✔ | ✔ | ✔ | ✖ |
| Limited | ✔ | ✔ | ✔ | ✖ | ✖ |
| Locked | ✔ | ✔ | ✖ | ✔ | ✖ |
| Protected | ✔ | ✔ | ✖ | ✖ | ✖ |
| Private | ✔ | ✖ | ✖ | ✖ | ✖ |

Only the owner of the note can change the note's permissions.

Embed a Note: Notes can be embedded as follows:

<iframe width="100%" height="500" src="https://hackmd.io/features" frameborder="0"></iframe>

Slide Mode: You can use a special syntax to organize your note into slides. After that, you can use the Slide Mode to make a presentation. Visit the above link for details.

View

Table of Contents: Look at the bottom right section of the view area; there is a ToC button. Pressing that button will show you the current Table of Contents and will highlight which section you're at. ToCs support up to three header levels.

Permalink: Every header will automatically add a permalink on its right side. You can hover over it and click it to anchor on it.

Edit

Shortcut Keys: Just like Sublime Text, which is pretty quick and convenient. For more information, see here.

Auto-Complete: This editor provides full auto-complete hints in markdown.
- Emojis: type `:` to show hints.
- Code blocks: type ``` plus a character to show hints.
- Headers: type `#` to show hints.
- Referrals: type `[]` to show hints.
- Externals: type `{}` to show hints.
- Images: type `!` to show hints.

Title: This will take the first level 1 header as the note title.

Tags: Using tags as follows, the specified tags will show in your history.

tags: features cool updated

You can provide advanced note information to set the browser behavior (visit the above link for details):
- robots: set web robots meta
- lang: set browser language
- dir: set text direction
- breaks: set to use line breaks
- GA: set to use Google Analytics
- disqus: set to use Disqus
- slideOptions: setup slide mode options

ToC: Use the syntax [TOC] to embed a table of contents into your note.
[TOC]

Emoji

You can type any emoji like this :smile: :smiley: :cry: :wink: See the full emoji list here.

ToDo List:
- [ ] ToDos
- [x] Buy some salad
- [ ] Brush teeth
- [x] Drink some water

Code Block: We support many programming languages; use the auto-complete function to see the entire list.

```javascript=
var s = "JavaScript syntax highlighting";
alert(s);
function $initHighlight(block, cls) {
  try {
    if (cls.search(/\bno\-highlight\b/) != -1)
      return process(block, true, 0x0F) + ' class=""';
  } catch (e) {
    /* handle exception */
  }
  for (var i = 0 / 2; i < classes.length; i++) {
    if (checkCondition(classes[i]) === undefined)
      return /\d+[\s/]/g;
  }
}
```

If you want line numbers, type `=` after specifying the code block language. Also, you can specify the start line number. Like below, the line numbers start from 101:

```javascript=101
var s = "JavaScript syntax highlighting";
alert(s);
function $initHighlight(block, cls) {
  try {
    if (cls.search(/\bno\-highlight\b/) != -1)
      return process(block, true, 0x0F) + ' class=""';
  } catch (e) {
    /* handle exception */
  }
  for (var i = 0 / 2; i < classes.length; i++) {
    if (checkCondition(classes[i]) === undefined)
      return /\d+[\s/]/g;
  }
}
```

Or you might want to continue the previous code block's line numbers; use `=+`:

```javascript=+
var s = "JavaScript syntax highlighting";
alert(s);
```

Sometimes you have a super long text without breaks. It's time to use `!` to wrap your code.

When you're a carpenter making a beautiful chest of drawers, you're not going to use a piece of plywood on the back.

Blockquote Tags: Use the syntax below to specify your name, time and color to vary the blockquotes.

[name=ChengHan Wu] [time=Sun, Jun 28, 2015 9:59 PM] [color=#907bf7]

Nested blockquotes are even supported!

[name=ChengHan Wu] [time=Sun, Jun 28, 2015 10:00 PM] [color=red]

Externals

YouTube {%youtube 1G4isv_Fylg %}

Vimeo {%vimeo 124148255 %}

Gist {%gist schacon/4277%}

SlideShare {%slideshare briansolis/26-disruptive-technology-trends-2016-2018-56796196 %}

Speakerdeck {%speakerdeck sugarenia/xxlcss-how-to-scale-css-and-keep-your-sanity %}

Caution: this might be blocked by your browser if not using an https URL. {%pdf https://papers.nips.cc/paper/5346-sequence-to-sequence-learning-with-neural-networks.pdf %}

MathJax

You can render LaTeX mathematical expressions using MathJax, as on math.stackexchange.com:

The Gamma function satisfying $\Gamma(n) = (n-1)!\quad\forall n\in\mathbb N$ is via the Euler integral

$$ x = {-b \pm \sqrt{b^2-4ac} \over 2a}. $$

$$ \Gamma(z) = \int_0^\infty t^{z-1}e^{-t}dt\,. $$

More information about LaTeX mathematical expressions here.

UML Diagrams

Sequence Diagrams

You can render sequence diagrams like this:

```sequence
Alice->Bob: Hello Bob, how are you?
Note right of Bob: Bob thinks
Bob-->Alice: I am good thanks!
Note left of Alice: Alice responds
Alice->Bob: Where have you been?
```

Flow Charts

Flow charts can be specified like this:

```flow
st=>start: Start
e=>end: End
op=>operation: My Operation
op2=>operation: lalala
cond=>condition: Yes or No?
st->op->op2->cond cond(yes)->e cond(no)->op2 ``` Graphviz digraph hierarchy { nodesep=1.0 // increases the separation between nodes node [color=Red,fontname=Courier,shape=box] //All nodes will this shape and colour edge [color=Blue, style=dashed] //All the lines look like this Headteacher->{Deputy1 Deputy2 BusinessManager} Deputy1->{Teacher1 Teacher2} BusinessManager->ITManager {rank=same;ITManager Teacher1 Teacher2} // Put them on the same level} Mermaid gantt title A Gantt Diagram section Section A task :a1, 2014-01-01, 30d Another task :after a1 , 20d section Another Task in sec :2014-01-12 , 12d anther task : 24d Abc X:1T:Speed the PloughM:4/4C:Trad.K:G|:GABc dedB|dedB dedB|c2ec B2dB|c2A2 A2BA|GABc dedB|dedB dedB|c2ec B2dB|A2F2 G4:||:g2gf gdBd|g2f2 e2d2|c2ec B2dB|c2A2 A2df|g2gf g2Bd|g2f2 e2d2|c2ec B2dB|A2F2 G4:| Alert Area :::success Yes :tada: ::: :::info This is a message :mega: ::: :::warning Watch out :zap: ::: :::danger Oh No! :fire: ::: Typography Headers # h1 Heading## h2 Heading### h3 Heading#### h4 Heading##### h5 Heading###### h6 Heading Horizontal Rules Typographic Replacements Enable typographer option to see result. (c) (C) (r) (R) (tm) (TM) (p) (P) +- test.. test... test..... test?..... test!.... !!!!!! ???? ,, Remarkable -- awesome "Smartypants, double quotes" 'Smartypants, single quotes' Emphasis This is bold text This is bold text This is italic text This is italic text Deleted text lu~lala~ Superscript: 19^th^ Subscript: H~2~O ++Inserted text++ ==Marked text== Blockquotes Blockquotes can also be nested... ...by using additional greater-than signs right next to each other... ...or with spaces between arrows. Lists Unordered Create a list by starting a line with +, -, or * Sub-lists are made by indenting 2 spaces: Marker character change forces new list start: Ac tristique libero volutpat at Facilisis in pretium nisl aliquet Nulla volutpat aliquam velit Very easy! Ordered Lorem ipsum dolor sit amet Consectetur adipiscing elit Integer molestie lorem at massa You can use sequential numbers... ...or keep all the numbers as 1. feafw 332 242 2552 e2 Start numbering with offset: foo bar Code Inline code Indented code // Some commentsline 1 of codeline 2 of codeline 3 of code Block code "fences" Sample text here... Syntax highlighting var foo = function (bar) { return bar++;};console.log(foo(5)); Tables Option Description data path to data files to supply the data that will be passed into templates. engine engine to be used for processing templates. Handlebars is the default. ext extension to be used for dest files. Right aligned columns Option Description data path to data files to supply the data that will be passed into templates. engine engine to be used for processing templates. Handlebars is the default. ext extension to be used for dest files. Left aligned columns Option Description data path to data files to supply the data that will be passed into templates. engine engine to be used for processing templates. Handlebars is the default. ext extension to be used for dest files. Center aligned columns Option Description data path to data files to supply the data that will be passed into templates. engine engine to be used for processing templates. Handlebars is the default. ext extension to be used for dest files. Links Images Like links, Images also have a footnote style syntax With a reference later in the document defining the URL location: Show the image with given size Footnotes Definition Lists Term 1 : Definition 1 with lazy continuation. 
Term 2 with inline markup : Definition 2 { some code, part of Definition 2 }Third paragraph of definition 2. Compact style: Term 1 ~ Definition 1 Term 2 ~ Definition 2a ~ Definition 2b Abbreviations This is an HTML abbreviation example. It converts "HTML", but keeps intact partial entries like "xxxHTMLyyy" and so on. *[HTML]: Hyper Text Markup Language
Answer

$\mu = \frac{v^2}{gr} - \tan\left(\tan^{-1}\left(\frac{v_0^2}{rg}\right)\right)$

Work Step by Step

We find the angle of the road based on the maximum posted speed. Thus, we obtain: $\theta = \tan^{-1}\left(\frac{v_0^2}{rg}\right)$

We also know: $mg\cos\theta\,\mu + mg\sin\theta = \frac{mv^2}{r}$

Thus, we find an expression for $\mu$: $\mu = \frac{\frac{v^2}{r} - g\sin\theta}{g\cos\theta}$

Since the banking angle is small, $\cos\theta \approx 1$, and this simplifies to: $\mu = \frac{v^2}{gr} - \tan\theta$

Plugging in the value of theta gives: $\mu = \frac{v^2}{gr} - \tan\left(\tan^{-1}\left(\frac{v_0^2}{rg}\right)\right)$
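As a quick numeric sanity check of the final expression (using illustrative values for $v$, $v_0$ and $r$ that are not from the problem statement), note that it reduces to $(v^2 - v_0^2)/(gr)$:

```python
import math

g = 9.8          # m/s^2
r = 70.0         # curve radius in meters (illustrative)
v0 = 14.0        # design (posted) speed in m/s (illustrative)
v = 20.0         # actual speed in m/s (illustrative)

theta = math.atan(v0**2 / (r * g))                 # banking angle from the posted speed
mu = v**2 / (g * r) - math.tan(theta)              # the answer's expression
mu_equivalent = (v**2 - v0**2) / (g * r)           # algebraically identical form

print(round(math.degrees(theta), 1), round(mu, 3), round(mu_equivalent, 3))
```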
Based on such demands, I designed a co-pilot which can:

1. Automatically identify the clusters (aggregates);
2. Remove periodic boundary conditions and put the center-of-mass at $(0,0,0)^T$;
3. Adjust the view vector along the minor axis of the aggregate;
4. Classify the aggregate.

After obtaining clusters in step 1, we must remove the periodic boundary conditions of the cluster. If, in step 1, one uses a BFS or DFS + Linked Cell List method, then one can remove the periodic boundary conditions during clustering; but this method has limitations: it does not work properly if the cluster is percolated throughout the box. Therefore, in this step, I use circular statistics to deal with the clusters. In a simulation box with periodic boundary conditions, the distance between an NP in an aggregate and the midpoint will never exceed $L/2$ in the corresponding box dimension. The midpoint here is not the center-of-mass; e.g., the distance between the nozzle point and the center-of-mass of an ear-syringe bulb is clearly larger than half its length; the midpoint is actually a "de-duplicated" center-of-mass. Besides, the circular mean also puts most points in the center in case of percolation. Therefore, in part 2, we have the following steps:

1. Choose an $r_\text{cut}$ to test whether the aggregate is percolated;
2. If the aggregate is percolated, evaluate the circular mean of all points, $r_c$;
3. Set all coordinates $r$ as $r\to \mathrm{pbc}(r-r_c)$;
4. If the aggregate is not percolated, the midpoint is evaluated by calculating the circular mean of the coordinates $r$ where $\rho(r)>0$, with $\rho(r)$ calculated using a bin size smaller than the $r_\text{cut}$ used in step 1;
5. Same as step 3, update the coordinates;
6. After step 5, the aggregates are unwrapped from the box; set $r\to r-\overline{r}$ to put the center-of-mass at $(0,0,0)^T$.

Here the circular mean of a set of angles $\alpha_i$ (each coordinate rescaled onto a circle) is the $\beta$ that minimizes the circular spread: $$\alpha=\underset{\beta}{\operatorname{argmin}}\sum_i (1-\cos(\alpha_i-\beta))$$

Adjusting the view vector is simple: evaluate the gyration tensor $rr^T/n$ and sort its eigenvectors by eigenvalue, i.e., $\lambda_1\ge\lambda_2\ge\lambda_3$; the minor axis is then the corresponding eigenvector $v_3$, and the aggregate can be rotated by $[v_1, v_2, v_3]$ so that the minor axis becomes the $z$-axis. A minimal sketch of these two geometric steps is given below.

The last step is a bit more tricky. The best trial I attempted was to use an SVC, a binary classification method. I used about 20 samples labeled as "desired"; these 20 samples were extended to 100 samples by adding some noise, e.g., moving coordinates a little bit, adding several NPs into the aggregate or removing several NPs randomly, without "breaking" the category of the morphology. Together with 100 "undesired" samples, I trained the SVC with a Gaussian kernel. The result turned out to be pretty good. I also tried to use an ANN to classify all 5 categories of morphologies obtained from the simulations, but the ANN model did not work very well; perhaps the reason was a lack of samples, or the model I built was too rough. I didn't try other multi-class methods; anyway, that part of the work was done, and I stopped developing this co-pilot a long time ago.
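Here is a compact numpy sketch of the two geometric steps described above: the circular (periodic) mean used to unwrap a cluster, and the gyration-tensor rotation that puts the minor axis on $z$. The function names, the box convention, and the closed-form angular mean via atan2 (which minimizes $\sum_i(1-\cos(\alpha_i-\beta))$) are my assumptions, not the author's actual code.

```python
import numpy as np

def circular_mean(coords, box):
    """Periodic (circular) mean of coordinates in a box with PBC.

    Each dimension is mapped onto an angle on a circle; the angular mean
    minimizing sum_i (1 - cos(alpha_i - beta)) is recovered with atan2.
    """
    theta = 2.0 * np.pi * coords / box          # (N, 3) angles
    beta = np.arctan2(np.sin(theta).mean(axis=0), np.cos(theta).mean(axis=0))
    return (beta % (2.0 * np.pi)) * box / (2.0 * np.pi)

def unwrap_and_center(coords, box):
    """Shift by the circular mean, re-wrap into [-L/2, L/2), then zero the COM."""
    r_c = circular_mean(coords, box)
    shifted = coords - r_c
    shifted -= box * np.round(shifted / box)    # minimum-image wrap
    return shifted - shifted.mean(axis=0)       # center of mass at (0, 0, 0)

def align_minor_axis(coords):
    """Rotate so the minor axis of the gyration tensor becomes the z-axis."""
    gyr = coords.T @ coords / len(coords)       # 3x3 gyration tensor (COM already at 0)
    vals, vecs = np.linalg.eigh(gyr)            # ascending eigenvalues
    order = np.argsort(vals)[::-1]              # lambda_1 >= lambda_2 >= lambda_3
    rot = vecs[:, order]                        # columns: major, middle, minor axes
    return coords @ rot

# Toy usage with a fake cluster straddling a periodic boundary (illustrative only).
box = np.array([20.0, 20.0, 20.0])
cluster = np.vstack([np.random.rand(50, 3) * 2.0 + 18.5,   # near one face...
                     np.random.rand(50, 3) * 2.0]) % box   # ...wrapping to the other
aligned = align_minor_axis(unwrap_and_center(cluster, box))
```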
Generally, all single bonds are said to permit free rotation while all double bonds are said to inhibit free rotation. This can actually be quantified: the rotation around the $\ce{O-O}$ bond in $\ce{H2O2}$ has an approximate rate constant of $2.0 \times 10^{8}~\mathrm{s^{-1}}$ while the rotation around the $\ce{C=C}$ bond in ethene has $1.3 \times 10^{-31}~\mathrm{s^{-1}}$. This translates to an equilibration half-life of ethene which is greater than the age of the universe.

It is, of course, helpful to understand why such a large barrier exists. This is due to the way double bonds are formed. ‘Normal’ single bonds are σ-symmetric overlaps of orbitals, i.e. they are necessarily totally symmetric with respect to rotation about the bond axis. Thus, any mechanisms that inhibit single bond rotations must derive from outside these bonds. The ‘second’ bond, however, is a π-symmetric bond, meaning that a plane of symmetry exists which is parallel to the bond axis (i.e. the bond axis is in said plane of symmetry). This automatically means that the bond is everything but symmetric with respect to rotation around the bond axis; if the p-orbitals do not remain parallel, the overlap gets smaller and the bond starts to break. Note that triple bonds, which contain two perpendicular π bonds, are freely rotatable again, since what is lost in one direction (e.g. vertical) is mathematically strictly gained in the other (e.g. horizontal). You can see the three types of bonds in the figure below.

Figure 1: Single, double and triple $\ce{C-C}$ bonds. The single bond is formed by σ overlap of $\mathrm{sp}^n$ hybrid orbitals, double and triple bonds by π-symmetric p-orbital overlap.

It may be immediately obvious that the overlap of a σ bond is generally much better than that of a π bond. Hence, the difference in orbital energies between σ and σ* is much greater than that between π and π*: π orbitals are less stabilised than σ orbitals. In turn, that means that the HOMO will generally be a π orbital (least stabilisation), while the LUMO will typically be a π* orbital (least destabilisation). Thus, the energy difference between π and π* is the smallest energy difference between an occupied and an unoccupied orbital. If a photon with the appropriate wavelength comes along, it can excite the π bonding electron into the antibonding π* orbital. (It also needs to come along from the appropriate direction — parallel to the $\ce{C=C}$ bond axis in ethene if I did my group theory correctly, but that is a minor issue.)

If we excite $1\pi^2\,2\pi^0 \ce{->} 1\pi^1\,2\pi^1$, we have effectively reduced the π bond order from $1$ to $0$, creating a ‘no bond’ where there used to be a ‘second single’ bond. If there is no overall bonding interaction, the rotation restrictions no longer apply. As long as one electron is excited into π*, the molecule is freely rotatable. When the electron finally relaxes, chances are that the geometry of the double bond has inverted. You can’t observe that in (symmetric) ethene, but other systems such as stilbene or hemithioindigo have been studied extensively.

If you don’t wish to invoke molecular orbital theory, you can also explain the entire idea with Lewis structures. Under the Lewis formalism, an incoming photon ($h\cdot \nu$) will cleave the $\ce{C=C}$ double bond while leaving the underlying $\ce{C-C}$ single bond intact.
It can be written as: $$\ce{-C=C- ->[$h\cdot \nu$] -C^.-C^.-{}}$$ Here too, you are turning a double bond into a single bond. The former cannot rotate; the latter is freely rotatable. Upon relaxation, whichever rotamer the molecule was in will be fixed again. Since the trans rotamer is more stable than the cis rotamer in almost all linear cases, that one will be accessed by irradiation.

Tl;dr: Yes, double bonds are not freely rotatable, but by irradiation you are breaking the double bond into a formal single-bonded diradical structure.
| a | b | c | Y |
|---|---|---|---|
| 0 | 0 | 0 | 0 |
| 0 | 0 | 1 | 1 |
| 0 | 1 | 0 | 0 |
| 0 | 1 | 1 | 1 |
| 1 | 0 | 0 | 1 |
| 1 | 0 | 1 | 1 |
| 1 | 1 | 1 | 0 |
| 1 | 1 | 0 | 0 |

How would you put this into a logical expression that shows Y's behaviour?

Giving an example of what Eugene has said, but letting you do your own homework. Let's look at another truth table:

$ \begin{array}{|c|c|c|c|} \hline a& b & c & \varphi \\ \hline 1 & 1& 1& 1\\ \hline 1 & 1 & 0 & 1 \\ \hline 1 & 0 & 1 & 0 \\ \hline 1 & 0 & 0 & 1 \\ \hline 0 & 1 & 1 & 0 \\ \hline 0 & 1 & 0 & 1\\ \hline 0 & 0 & 1 & 0 \\ \hline 0 & 0 & 0 & 1 \\ \hline \end{array} $

To put it in DNF, just look at the rows which evaluate to True.

$ \begin{array}{|c|c|c|c|c|} \hline a& b & c & \varphi & \\ \hline 1 & 1& 1& 1 & a \wedge b \wedge c \\ \hline 1 & 1 & 0 & 1& a \wedge b \wedge \lnot c \\ \hline 1 & 0 & 0 & 1& a \wedge \lnot b \wedge \lnot c \\ \hline 0 & 1 & 0 & 1& \lnot a \wedge b \wedge \lnot c \\ \hline 0 & 0 & 0 & 1& \lnot a \wedge \lnot b \wedge \lnot c \\ \hline \end{array} $

So the DNF of $\varphi$ is:

$$ (a \wedge b \wedge c) \vee (a \wedge b \wedge \lnot c) \vee (a \wedge \lnot b \wedge \lnot c) \vee (\lnot a \wedge b \wedge \lnot c) \vee (\lnot a \wedge \lnot b \wedge \lnot c) $$

To put it in CNF, just look at the rows which evaluate to False, and then negate those rows to get True.

$ \begin{array}{|c|c|c|c|c|c|c|c|} \hline a& b & c & \varphi & & & \lnot \varphi & \\ \hline 1 & 0 & 1 & 0 & a \wedge \lnot b \wedge c & & 1 & \lnot a \vee b \vee \lnot c\\ \hline 0 & 1 & 1 & 0 & \lnot a \wedge b \wedge c & & 1 & a \vee \lnot b \vee \lnot c\\ \hline 0 & 0 & 1 & 0 & \lnot a \wedge \lnot b \wedge c & & 1 & a \vee b \vee \lnot c\\ \hline \end{array} $

So the CNF of $\varphi$ is:

$$ (\lnot a \vee b \vee \lnot c) \wedge (a \vee \lnot b \vee \lnot c) \wedge (a \vee b \vee \lnot c) $$

You can write either the CNF or the DNF from the truth table directly and then transform it into any form you like by opening brackets and possibly using the absorption rule.

You can write Y as: |&A!B&C!A

If you can't read that, here is a more readable version: Y = ((A AND (NOT B)) OR (C AND (NOT A)))

If either (A AND NOT B) or (C AND NOT A) holds, then Y = 1. Else, Y = 0.
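To illustrate the minterm construction the first answer describes, here is a small Python sketch that derives the DNF directly from the question's table; the function name and the string output format are just for illustration.

```python
from itertools import product

def dnf_from_truth_table(truth, names=("a", "b", "c")):
    """Build a DNF: one conjunction (minterm) per row whose output is 1."""
    terms = []
    for assignment in product((0, 1), repeat=len(names)):
        if truth[assignment]:
            literals = [n if v else f"~{n}" for n, v in zip(names, assignment)]
            terms.append("(" + " & ".join(literals) + ")")
    return " | ".join(terms) if terms else "0"

# The Y column from the question, keyed by (a, b, c).
Y = {
    (0, 0, 0): 0, (0, 0, 1): 1, (0, 1, 0): 0, (0, 1, 1): 1,
    (1, 0, 0): 1, (1, 0, 1): 1, (1, 1, 1): 0, (1, 1, 0): 0,
}

print(dnf_from_truth_table(Y))
# (~a & ~b & c) | (~a & b & c) | (a & ~b & ~c) | (a & ~b & c)
# which simplifies to (a & ~b) | (~a & c), matching the expression above.
```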
Hi good people! How do I go about finding the general term for the following sequence: \({1\over3};{8\over9};1;{64\over81}\) I have tested the sequence for arithmetic, geometric and quadratic... but cannot determine which... please help...

Try the following: \(1^3\cdot\frac{1}{3};\; 2^3\cdot\left(\frac{1}{3}\right)^2;\; 3^3\cdot\left(\frac{1}{3}\right)^3;\; 4^3\cdot\left(\frac{1}{3}\right)^4;\ \ldots\) etc.

How do I go about finding the general term for the following sequence: \(\displaystyle {1\over3};{8\over9};1;{64\over81}\)

\(\huge{ \begin{array}{|rcll|} \hline a_1 &=& {1\over3} \\\\ a_2 &=& {8\over9} \\\\ a_3 &=& 1 \\\\ a_4 &=& {64\over81} \\\\ \ldots \\\\ a_n &=& \dfrac{n^3}{3^n} \\ \hline \end{array} }\)

Hi Heureka, I have no idea why the answer is what you say it is... How did you get to it? ... thanx anyways...

\(\huge{ \begin{array}{|lrlrlrlrlrl|} \hline & a_{\color{red}1} &;& a_{\color{red}2}&;& a_{\color{red}3} &;& a_{\color{red}4} &;& \ldots &;& a_{\color{red}n} \\ \hline & \dfrac{1}{3}&;& \dfrac{8}{9}&;& 1&;& \dfrac{64}{81}&;& \ldots &;& \\\\ \Rightarrow & \dfrac{1^3}{3^1}&;& \dfrac{2^3}{3^2}&;& 1&;& \dfrac{4^3}{3^4}&;& \ldots &;& \\\\ \Rightarrow & \dfrac{{\color{red}1}^3}{3^{\color{red}1}}&;& \dfrac{{\color{red}2}^3}{3^{\color{red}2}}&; & \dfrac{{\color{red}3}^3}{3^{\color{red}3}}&;& \dfrac{{\color{red}4}^3}{3^{\color{red}4}}&;& \ldots &;&\dfrac{{\color{red}n}^3}{3^{\color{red}n}} \\ \hline \end{array} }\)
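A quick check of the proposed general term $a_n = \dfrac{n^3}{3^n}$ against the listed terms (a throwaway Python snippet, just to confirm the pattern):

```python
from fractions import Fraction

# Check the proposed general term a_n = n^3 / 3^n against the listed terms.
a = lambda n: Fraction(n**3, 3**n)
print([a(n) for n in range(1, 5)])
# [Fraction(1, 3), Fraction(8, 9), Fraction(1, 1), Fraction(64, 81)]
```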
Let $(E,D)$ be a probabilistic encryption scheme with $n$-length keys (given a key $k$, we denote the corresponding encryption function by $E_k$) and $n+10$-length messages. Then, show that there exist two messages $x_0, x_1 \in \{0,1\}^{n+10}$ and a function $A$ such that $$\mathrm{Pr}_{b \in \{0,1\}, k \in \{0,1\}^n}[A(E_k(x_b)) = b ] \geq \frac{9}{10}.$$ (This is problem 9.4 from Arora/Barak Computational Complexity.) My gut intuition says that the same idea from the proof in the deterministic case should carry over. WLOG let $x_0 = 0^{n+10}$, and denote by $S$ the support of $E_{U_n}(0^{n+10})$. We will take $A$ to output $0$ if the input is in $S$. Then, assuming the condition stated in the problem fails to hold for all $x \in \{0,1\}^{n+10}$, we conclude that $\mathrm{Pr}[E_{U_n}(x) \in S] \geq 2/10$ for all $x$. This implies that there exists some key so that $E_k$ maps at least $2/10$ of the $x$ into $S$ (the analogue of this statement in the deterministic case suffices to derive a contradiction), but now I don't really see how to continue. Is my choice of $A$ here correct, or should I be using a different approach?
Calculations

In this question, I solved for the stresses on a spacecraft passing close to a black hole's event horizon. There isn't some magical barrier that you go through around an event horizon, you just can't get out; you probably wouldn't even notice. I can use the same process to calculate the stresses that a prison slightly inside the event horizon would face.

Event horizon of a black hole

The event horizon is the distance from a black hole where the escape velocity is equal to $c$. Escape velocity is $$v_e = \sqrt{\frac{2GM}{r}}.$$ A black hole with 40 billion solar masses will have mass $8\times10^{40}$ kg. Solving for r when $v_e=c$ gives $$r = \frac{2GM}{c^2} = \frac{2\cdot6.7\times10^{-11}\cdot8\times10^{40}}{\left(3\times10^{8}\right)^2} \approx 1\times10^{14} \text{ meters}.$$

Gravity as a function of distance from the black hole

A person is 2 m tall, and 'orbiting' just inside the event horizon at $1\times10^{14}$ m from a black hole of mass $8\times10^{40}$ kg. The tidal acceleration between the head and feet of a 2 meter tall person due to the gravity of the black hole is $$\begin{align}a &= \frac{m_{hole}G}{(r+2)^2}-\frac{m_{hole}G}{r^2} \\&= 6.7\times10^{-11}\cdot8\times10^{40}\left[\frac{1}{\left(1\times10^{14}+2\right)^2}-\frac{1}{\left(1\times10^{14}\right)^2}\right]\\&=-2\times10^{-11} \frac{\text{m}}{\text{s}^2}\end{align}$$

What would that do to a 1 km long cylinder?

Conveniently, in my other question, I calculated the tension on a 1 km long cylindrical object. Conveniently, this could be a pretty reasonable prison space station. Near the ergosphere of a black hole, the tension forces would destroy any known object, but what about at the event horizon? Following the same math in the other question, and with the same structural assumptions, I get the differential stress on any slice of the prison/station: $$\frac{dF_{slice}}{dl} = \frac{2\times10^{35}}{(1\times10^{14}+l)^2}.$$ Total net force on the rod is $2\times10^{10}$ N, and maximum stress of about $1\times10^{12}$ N. Working backwards using the equation for gravity, we see that the assumption is that the station has a mass of 40000 tons. Depending on the cross-sectional area of the load bearing parts of your station, we can calculate the stresses. If your station is a cylinder 100 m in radius, and 1/10 of the available area is taken up by load bearing structures, then the maximum stress on the 3000 m$^2$ of load bearing structure is about 6 MPa. A common structural steel has a yield strength in tension of about 250 MPa, so this isn't too much. If you have the technology to build space stations inside a black hole, then it is reasonable that you could construct it out of materials that won't fall apart.

The second question is how long you can maintain your orbit. Using simple Newtonian mechanics (Warning! Not valid near a singularity!) the gravitational pull of $2\times10^{10}$ N will have to be counter-acted by thrust. Now, that is a lot of thrust, about three orders of magnitude greater than a Saturn V. I suppose it really depends what sort of propulsion system you have. Thrust as a function of mass flow rate is given as $T = v\frac{dm}{dt}$. Assuming exhaust at the speed of light from some magical propulsion system, you still need about 70 kg of propellant expelled every second to keep from falling into the black hole.

Conclusions

Given the small tidal acceleration, even a large object (1 km long, roughly 40,000 tons) could reasonably be kept together with known materials at the event horizon of such a large black hole.
As for keeping such an object in orbit, for any propulsion system with reaction mass, the propellant usage would be very large (about 250 tons per hour, as calculated above). Given that the mass of the whole station is 40,000 tons, you would burn through the entire station's mass in a week. Just like the tyranny of the rocket equation, the tyranny of a black hole's gravity is oppressive: the more propellant you keep on board, the harder you are pulled in and the more propellant you need. I suppose you could be refueled with propellant, but that is a lot of money to be literally throwing into a black hole. For some sort of reaction-less system, well, I don't know how to measure that. You can't gain momentum out of a black hole, so I don't know how you could calculate the thrust given off by a photonic engine. In any case, the 'not falling into the hole' part seems to be the catch with any propellant-based system. Doesn't seem very reasonable, given relatively hard science constraints.
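For readers who want to re-run the arithmetic, here is a short sketch that reproduces the post's rounded numbers. The constants and the Newtonian treatment follow the post; the rounding of r to 1×10^14 m is the post's, not an exact value.

```python
G = 6.674e-11        # m^3 kg^-1 s^-2
c = 3.0e8            # m/s
M = 40e9 * 2.0e30    # 40 billion solar masses, in kg

r_s = 2 * G * M / c**2          # Schwarzschild radius; the post rounds this to 1e14 m
r = 1.0e14                      # the post's rounded radius, used below

# Newtonian tidal acceleration across a 2 m tall person at r.
a_tidal = G * M * (1.0 / (r + 2.0)**2 - 1.0 / r**2)     # ~ -2e-11 m/s^2

# Gravitational pull on the ~40,000-tonne station, and the propellant rate
# needed to hover with light-speed exhaust (T = v * dm/dt with v = c).
m_station = 4.0e7                                        # kg
F = G * M * m_station / r**2                             # ~ 2e10 N
mdot = F / c                                             # ~ 70 kg/s
tonnes_per_hour = mdot * 3600 / 1000                     # ~ 250 t/h

print(f"r_s = {r_s:.2e} m, tidal = {a_tidal:.1e} m/s^2, "
      f"pull = {F:.1e} N, propellant = {mdot:.0f} kg/s = {tonnes_per_hour:.0f} t/h")
```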
Skills to Develop Solve an equation with constants on both sides Solve an equation with variables on both sides Solve an equation with variables and constants on both sides Solve equations using a general strategy Solve an Equation with Constants on Both Sides You may have noticed that in all the equations we have solved so far, all the variable terms were on only one side of the equation with the constants on the other side. This does not happen all the time—so now we’ll see how to solve equations where the variable terms and/or constant terms are on both sides of the equation. Our strategy will involve choosing one side of the equation to be the variable side, and the other side of the equation to be the constant side. Then, we will use the Subtraction and Addition Properties of Equality, step by step, to get all the variable terms together on one side of the equation and the constant terms together on the other side. By doing this, we will transform the equation that started with variables and constants on both sides into the form ax = b. We already know how to solve equations of this form by using the Division or Multiplication Properties of Equality. Solve: 4x + 6 = −14. Solution In this equation, the variable is only on the left side. It makes sense to call the left side the variable side. Therefore, the right side will be the constant side. We’ll write the labels above the equation to help us remember what goes where. Since the left side is the variable side, the 6 is out of place. We must "undo" adding 6 by subtracting 6, and to keep the equality we must subtract 6 from both sides. Use the Subtraction Property of Equality. $$4x + 6 \textcolor{red}{-6} = -14 \textcolor{red}{-6}$$ Simplify. $$4x = -20$$ Now all the x's are on the left and the constant on the right. Use the Division Property of Equality. $$\frac{4x}{\textcolor{red}{4}} = \frac{-20}{\textcolor{red}{4}}$$ Simplify. $$x = -5$$ Check: Let x = −5. $$\begin{split} 4x + 6 &= -14 \\ 4(\textcolor{red}{-5}) + 6 &= -14 \\ -20 + 6 &= -14 \\ -14 &= -14\; \checkmark \end{split}$$ Exercise 8.39: Solve: 3x + 4 = −8. Exercise 8.40: Solve: 5a + 3 = −37. Example 8.21: Solve: 2y − 7 = 15. Solution Notice that the variable is only on the left side of the equation, so this will be the variable side and the right side will be the constant side. Since the left side is the variable side, the 7 is out of place. It is subtracted from the 2y, so to ‘undo’ subtraction, add 7 to both sides. Add 7 to both sides. $$2y - 7 \textcolor{red}{+7} = 15 \textcolor{red}{+7}$$ Simplify. $$2y = 22$$ The variables are now on one side and the constants on the other. Divide both sides by 2. $$\frac{2y}{\textcolor{red}{2}} = \frac{22}{\textcolor{red}{2}}$$ Simplify. $$y = 11$$ Check: Substitute: y = 11. $$\begin{split} 2y - 7 &= 15 \\ 2 \cdot \textcolor{red}{11} - 7 &\stackrel{?}{=} 15 \\ 22 - 7 &\stackrel{?}{=} 15 \\ 15 &= 15\; \checkmark \end{split}$$ Exercise 8.41: Solve: 5y − 9 = 16. Exercise 8.42: Solve: 3m − 8 = 19. Solve an Equation with Variables on Both Sides What if there are variables on both sides of the equation? We will start like we did above—choosing a variable side and a constant side, and then use the Subtraction and Addition Properties of Equality to collect all variables on one side and all constants on the other side. Remember, what you do to the left side of the equation, you must do to the right side too. Example 8.22: Solve: 5x = 4x + 7. 
Solution Here the variable, x, is on both sides, but the constants appear only on the right side, so let’s make the right side the “constant” side. Then the left side will be the “variable” side. We don't want any variables on the right, so subtract the 4x. $$5x \textcolor{red}{-4x} = 4x \textcolor{red}{-4x} + 7$$ Simplify. $$x = 7$$ We have all the variables on one side and the constants on the other. We have solved the equation. Check: Substitute 7 for x. $$\begin{split} 5x &= 4x + 7 \\ 5(\textcolor{red}{7}) &\stackrel{?}{=} 4(\textcolor{red}{7}) + 7 \\ 35 &\stackrel{?}{=} 28 + 7 \\ 35 &= 35\; \checkmark \end{split}$$ Exercise 8.43: Solve: 6n = 5n + 10. Exercise 8.44: Solve: −6c = −7c + 1. Example 8.23: Solve: 5y − 8 = 7y. Solution The only constant, −8, is on the left side of the equation and variable, y, is on both sides. Let’s leave the constant on the left and collect the variables to the right. Subtract 5y from both sides. $$5y \textcolor{red}{-5y} -8 = 7y \textcolor{red}{-5y}$$ Simplify. $$-8 = 2y$$ We have the variables on the right and the constants on the left. Divide both sides by 2. $$\frac{-8}{\textcolor{red}{2}} = \frac{2y}{\textcolor{red}{2}}$$ Simplify. $$-4 = y$$ Rewrite with the variable on the left. $$y = -4$$ Check: Let y = −4. $$\begin{split} 5y - 8 &= 7y \\ 5(\textcolor{red}{-4}) -8 &\stackrel{?}{=} 7(\textcolor{red}{-4}) \\ -20 - 8 &\stackrel{?}{=} -28 \\ -28 &= -28\; \checkmark \end{split}$$ Exercise 8.45: Solve: 3p − 14 = 5p. Exercise 8.46: Solve: 8m + 9 = 5m. Example 8.24: Solve: 7x = − x + 24. Solution The only constant, 24, is on the right, so let the left side be the variable side. Remove the −x from the right side by adding x to both sides. $$7x \textcolor{red}{+x} = -x \textcolor{red}{+x} + 24$$ Simplify. $$8x = 24$$ All the variables are on the left and the constants are on the right. Divide both sides by 8. $$\frac{8x}{\textcolor{red}{8}} = \frac{24}{\textcolor{red}{8}}$$ Simplify. $$x = 3$$ Check: Substitute x = 3. $$\begin{split} 7x &= -x + 24 \\ 7(\textcolor{red}{3}) &\stackrel{?}{=} -(\textcolor{red}{3}) + 24 \\ 21 &= 21\; \checkmark \end{split}$$ Exercise 8.47: Solve: 12j = −4j + 32. Exercise 8.48: Solve: 8h = −4h + 12. Solve Equations with Variables and Constants on Both Sides The next example will be the first to have variables and constants on both sides of the equation. As we did before, we’ll collect the variable terms to one side and the constants to the other side. Example 8.25: Solve: 7x + 5 = 6x + 2. Solution Start by choosing which side will be the variable side and which side will be the constant side. The variable terms are 7x and 6x. Since 7 is greater than 6, make the left side the variable side and so the right side will be the constant side. Collect the variable terms to the left side by subtracting 6x from both sides. $$7x \textcolor{red}{-6x} + 5 = 6x \textcolor{red}{-6x} +2$$ Simplify. $$x + 5 = 2$$ Now, collect the constants to the right side by subtracting 5 from both sides. $$x + 5 \textcolor{red}{-5} = 2 \textcolor{red}{-5}$$ Simplify. $$x = -3$$ The solution is x = −3. Check: Let x = −3. $$\begin{split} 7x + 5 &= 6x + 2 \\ 7(\textcolor{red}{-3}) + 5 &\stackrel{?}{=} 6(\textcolor{red}{-3}) + 2 \\ -21 + 5 &\stackrel{?}{=} -18 + 2 \\ -16 &= -16\; \checkmark \end{split}$$ Exercise 8.49: Solve: 12x + 8 = 6x + 2. Exercise 8.50: Solve: 9y + 4 = 7y + 12. We’ll summarize the steps we took so you can easily refer to them. HOW TO: SOLVE AN EQUATION WITH VARIABLES AND CONSTANTS ON BOTH SIDES Step 1. 
Choose one side to be the variable side and then the other will be the constant side. Step 2. Collect the variable terms to the variable side, using the Addition or Subtraction Property of Equality. Step 3. Collect the constants to the other side, using the Addition or Subtraction Property of Equality. Step 4. Make the coefficient of the variable 1, using the Multiplication or Division Property of Equality. Step 5. Check the solution by substituting it into the original equation. It is a good idea to make the variable side the one in which the variable has the larger coefficient. This usually makes the arithmetic easier. Example 8.26: Solve: 6n − 2 = −3n + 7. Solution We have 6n on the left and −3n on the right. Since 6 > − 3, make the left side the “variable” side. We don't want variables on the right side—add 3n to both sides to leave only constants on the right. $$6n \textcolor{red}{+3n} - 2 = -3n \textcolor{red}{+3n} +7$$ Combine like terms. $$9n - 2 = 7$$ We don't want any constants on the left side, so add 2 to both sides. $$9n - 2 \textcolor{red}{+2} = 7 \textcolor{red}{+2}$$ Simplify. $$9n = 9$$ The variable term is on the left and the constant term is on the right. To get the coefficient of n to be one, divide both sides by 9. $$\frac{9n}{\textcolor{red}{9}} = \frac{9}{\textcolor{red}{9}}$$ Simplify. $$n = 1$$ Check: Substitute 1 for n. $$\begin{split} 6n - 2 &= -3n + 7 \\ 6(\textcolor{red}{1}) - 2 &\stackrel{?}{=} -3(\textcolor{red}{1}) + 7 \\ 4 &\stackrel{?}{=} -3 + 7 \\ 4 &= 4\; \checkmark \end{split}$$ Exercise 8.51: Solve: 8q − 5 = −4q + 7. Exercise 8.52: Solve: 7n − 3 = n + 3. Example 8.27: Solve: 2a − 7 = 5a + 8. Solution This equation has 2a on the left and 5a on the right. Since 5 > 2, make the right side the variable side and the left side the constant side. Subtract 2a from both sides to remove the variable term from the left. $$2a \textcolor{red}{-2a} - 7 = 5a \textcolor{red}{-2a} + 8$$ Combine like terms. $$-7 = 3a + 8$$ Subtract 8 from both sides to remove the constant from the right. $$-7 \textcolor{red}{-8} = 3a + 8 \textcolor{red}{-8}$$ Simplify. $$-15 = 3a$$ Divide both sides by 3 to make 1 the coefficient of a. $$\frac{-15}{\textcolor{red}{3}} = \frac{3a}{\textcolor{red}{3}}$$ Simplify. $$-5 = a$$ Check: Let a = −5. $$\begin{split} 2a - 7 &= 5a + 8 \\ 2(\textcolor{red}{-5}) - 7 &\stackrel{?}{=} 5(\textcolor{red}{-5}) + 8 \\ -10 - 7 &\stackrel{?}{=} -25 + 8 \\ -17 &= -17\; \checkmark \end{split}$$ Note that we could have made the left side the variable side instead of the right side, but it would have led to a negative coefficient on the variable term. While we could work with the negative, there is less chance of error when working with positives. The strategy outlined above helps avoid the negatives! Exercise 8.53: Solve: 2a − 2 = 6a + 18. Exercise 8.54: Solve: 4k − 1 = 7k + 17. To solve an equation with fractions, we still follow the same steps to get the solution. Example 8.28: Solve: \(\frac{3}{2}\)x + 5 = \(\frac{1}{2}\)x − 3. Solution Since \(\frac{3}{2} > \frac{1}{2}\), make the left side the variable side and the right side the constant side. Subtract \(\frac{1}{2}\)x from both sides. $$\frac{3}{2} x \textcolor{red}{- \frac{1}{2} x} + 5 = \frac{1}{2} x \textcolor{red}{- \frac{1}{2} x} - 3$$ Combine like terms. $$x + 5 = -3$$ Subtract 5 from both sides. $$x + 5 \textcolor{red}{-5} = -3 \textcolor{red}{-5}$$ Simplify. $$x = -8$$ Check: Let x = −8.
$$\begin{split} \frac{3}{2} x + 5 &= \frac{1}{2} x - 3 \\ \frac{3}{2} (\textcolor{red}{-8}) + 5 &\stackrel{?}{=} \frac{1}{2} (\textcolor{red}{-8}) - 3 \\ -12 + 5 &\stackrel{?}{=} -4 - 3 \\ -7 &= -7\; \checkmark \end{split}$$ Exercise 8.55: Solve: \(\frac{7}{8}\)x - 12 = \(- \frac{1}{8}\)x − 2. Exercise 8.56: Solve: \(\frac{7}{6}\)y + 11 = \(\frac{1}{6}\)y + 8. We follow the same steps when the equation has decimals, too. Example 8.29: Solve: 3.4x + 4 = 1.6x − 5. Solution Since 3.4 > 1.6, make the left side the variable side and the right side the constant side. Subtract 1.6x from both sides. $$3.4x \textcolor{red}{-1.6x} + 4 = 1.6x \textcolor{red}{-1.6x} - 5$$ Combine like terms. $$1.8x + 4 = -5$$ Subtract 4 from both sides. $$1.8x + 4 \textcolor{red}{-4} = -5 \textcolor{red}{-4}$$ Simplify. $$1.8x = -9$$ Use the Division Property of Equality. $$\frac{1.8x}{\textcolor{red}{1.8}} = \frac{-9}{\textcolor{red}{1.8}}$$ Simplify. $$x = -5$$ Check: Let x = −5. $$\begin{split} 3.4x + 4 &= 1.6x - 5 \\ 3.4(\textcolor{red}{-5}) + 4 &\stackrel{?}{=} 1.6(\textcolor{red}{-5}) - 5 \\ -17 + 4 &\stackrel{?}{=} -8 - 5 \\ -13 &= -13\; \checkmark \end{split}$$ Exercise 8.57: Solve: 2.8x + 12 = −1.4x − 9. Exercise 8.58: Solve: 3.6y + 8 = 1.2y − 4. Contributors Lynn Marecek (Santa Ana College) and MaryAnne Anthony-Smith (Formerly of Santa Ana College). This content is licensed under Creative Commons Attribution License v4.0 "Download for free at http://cnx.org/contents/fd53eae1-fa2...49835c3c@5.191."
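Not part of the original text: if you want to cross-check any of the solutions above by machine, a short SymPy session (Python, assumed available) confirms them. The two calls below redo Examples 8.29 and 8.28 with exact rationals, which is just a sketch of how such a check could look.

from sympy import symbols, Eq, solve, Rational

x = symbols('x')
# Example 8.29: 3.4x + 4 = 1.6x - 5, written with exact rationals to avoid float round-off
print(solve(Eq(Rational(17, 5)*x + 4, Rational(8, 5)*x - 5), x))   # -> [-5]
# Example 8.28: (3/2)x + 5 = (1/2)x - 3
print(solve(Eq(Rational(3, 2)*x + 5, Rational(1, 2)*x - 3), x))    # -> [-8]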
Equivalence of Definitions of Curvature Theorem The following definitions of the curvature $\kappa$ of a curve $C$ at a point $P$ are equivalent. Definition 1: $\kappa = \dfrac {\d \psi} {\d s}$ where: $\psi$ is the angle the tangent to $C$ at $P$ makes with the $x$-axis, and $s$ is the arc length along $C$. Definition 2: Let $C$ be embedded in a cartesian plane as the graph of $y = \map y x$. Then: $\kappa = \dfrac {y''} {\paren {1 + y'^2}^{3/2} }$ where: $y' = \dfrac {\d y} {\d x}$ is the derivative of $y$ with respect to $x$ at $P$, and $y'' = \dfrac {\d^2 y} {\d x^2}$ is the second derivative of $y$ with respect to $x$ at $P$. Definition 3: Let $C$ be defined by the parametric equations: $\begin{cases} x = \map x t \\ y = \map y t \end{cases}$ Then: $\kappa = \dfrac {x' y'' - y' x''} {\paren {x'^2 + y'^2}^{3/2} }$ where: $x' = \dfrac {\d x} {\d t}$ is the derivative of $x$ with respect to $t$ at $P$, $y' = \dfrac {\d y} {\d t}$ is the derivative of $y$ with respect to $t$ at $P$, and $x''$ and $y''$ are the second derivatives of $x$ and $y$ with respect to $t$ at $P$. Proof We show the equivalence of Definition 1 and Definition 2. Let $\kappa = \dfrac {\d \psi} {\d s}$ as in Definition 1, where $\psi$ is the angle the tangent to $C$ makes with the $x$-axis, so that $\tan \psi = y'$. That is: \(\displaystyle \frac {\d} {\d \psi} \tan \psi\) \(=\) \(\displaystyle \frac {\d} {\d \psi} \frac {\d y} {\d x}\) \(\displaystyle \leadsto \ \ \) \(\displaystyle \sec^2 \psi\) \(=\) \(\displaystyle \frac {\d y'} {\d \psi}\) Derivative of Tangent Function \(\displaystyle \leadsto \ \ \) \(\displaystyle 1 + \tan^2 \psi\) \(=\) \(\displaystyle \frac {\d y'} {\d \psi}\) Difference of Squares of Secant and Tangent \(\displaystyle \leadsto \ \ \) \(\displaystyle 1 + y'^2\) \(=\) \(\displaystyle \frac {\d y'} {\d \psi}\) \(\displaystyle \leadsto \ \ \) \(\displaystyle \frac 1 {1 + y'^2}\) \(=\) \(\displaystyle \frac {\d \psi} {\d y'}\) We also have that: \(\displaystyle \d s\) \(=\) \(\displaystyle \sqrt {\d x^2 + \d y^2}\) \(\displaystyle \leadsto \ \ \) \(\displaystyle \frac {\d s} {\d x}\) \(=\) \(\displaystyle \sqrt {\paren {\frac {\d x} {\d x} }^2 + \paren {\frac {\d y} {\d x} }^2}\) \(\displaystyle \) \(=\) \(\displaystyle \sqrt {1 + y'^2}\) \(\displaystyle \leadsto \ \ \) \(\displaystyle \frac {\d x} {\d s}\) \(=\) \(\displaystyle \frac 1 {\paren {1 + y'^2}^{1/2} }\) Then: \(\displaystyle \kappa\) \(=\) \(\displaystyle \dfrac {\d \psi} {\d s}\) \(\displaystyle \) \(=\) \(\displaystyle \dfrac {\d \psi} {\d y'} \dfrac {\d y'} {\d x} \dfrac {\d x} {\d s}\) Chain Rule \(\displaystyle \) \(=\) \(\displaystyle \frac 1 {1 + y'^2} y'' \frac 1 {\paren {1 + y'^2}^{1/2} }\) \(\displaystyle \) \(=\) \(\displaystyle \dfrac {y''} {\paren {1 + y'^2}^{3/2} }\) $\blacksquare$
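The equivalence can also be sanity-checked numerically. The SymPy snippet below (my addition, not part of the proof) compares the explicit formula with the parametric formula for the parabola y = x^2, parametrized by t ↦ (t, t^2); since x = t here, the graph derivatives coincide with the t-derivatives.

from sympy import symbols, diff, simplify, Rational

t = symbols('t')
x, y = t, t**2                                   # parametrize y = x^2 by t -> (t, t^2)

# Graph form: kappa = y'' / (1 + y'^2)^(3/2); derivatives w.r.t. x coincide with t here
kappa_graph = diff(y, t, 2) / (1 + diff(y, t)**2)**Rational(3, 2)

# Parametric form: kappa = (x' y'' - y' x'') / (x'^2 + y'^2)^(3/2)
xp, yp = diff(x, t), diff(y, t)
xpp, ypp = diff(x, t, 2), diff(y, t, 2)
kappa_param = (xp*ypp - yp*xpp) / (xp**2 + yp**2)**Rational(3, 2)

print(simplify(kappa_graph - kappa_param))       # -> 0, the two formulas agree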
2019-10-11 14:20 TURBO stream animation / LHCb Collaboration An animation illustrating the TURBO stream is provided. It shows events discarded by the trigger in quick sequence, followed by an event that is kept but stripped of all data except four tracks [...] LHCB-FIGURE-2019-010.- Geneva : CERN, 2019 - 3.

2019-09-12 16:43 Pending / LHCb Collaboration Pending LHCB-FIGURE-2019-008.- Geneva : CERN.

2019-09-10 11:06 Smog2 Velo tracking efficiency / LHCb Collaboration The LHCb fixed-target programme is facing a major upgrade (Smog2) for Run 3 data taking, consisting in the installation of a confinement cell for the gas covering $z \in [-500, -300] \, mm$. Such a displacement of the $p$-gas collisions with respect to the nominal $pp$ interaction point requires a detailed study of the reconstruction performance. [...] LHCB-FIGURE-2019-007.- Geneva : CERN, 10 - 4. Fulltext: LHCb-FIGURE-2019-007_2 - PDF; LHCb-FIGURE-2019-007 - PDF;

2019-09-09 14:37 Background rejection study in the search for $\Lambda^0 \rightarrow p^+ \mu^- \overline{\nu}$ / LHCb Collaboration A background rejection study has been made using LHCb Simulation in order to investigate the capacity of the experiment to distinguish between $\Lambda^0 \rightarrow p^+ \mu^- \overline{\nu}$ and its main background $\Lambda^0 \rightarrow p^+ \pi^-$. Two variables were explored, and their rejection power was estimated by applying selection criteria. [...] LHCB-FIGURE-2019-006.- Geneva : CERN, 09 - 4. Fulltext: PDF;

2019-09-06 14:56 Tracking efficiencies prior to alignment corrections from 1st Data challenges / LHCb Collaboration These plots show the first results on tracking efficiencies, before application of alignment corrections, as obtained from the 1st data challenge tests. In this challenge, several tracking detectors (the VELO, SciFi and Muon) have been misaligned and the effects on the tracking efficiencies are studied. [...] LHCB-FIGURE-2019-005.- Geneva : CERN, 2019 - 5. Fulltext: PDF;

2019-09-02 15:30 First study of the VELO pixel 2 half alignment / LHCb Collaboration A first look into the 2-half alignment for the Run 3 Vertex Locator (VELO) has been made. The alignment procedure has been run on a minimum bias Monte Carlo Run 3 sample in order to investigate its functionality. [...] LHCB-FIGURE-2019-003.- Geneva : CERN, 02 - 4. Fulltext: VP_alignment_approval - TAR; VELO_plot_approvals_VPAlignment_v3 - PDF;

2019-07-09 09:53 Variation of VELO Alignment Constants with Temperature / LHCb Collaboration A study of the variation of the alignment constants has been made in order to investigate the variations of the LHCb Vertex Locator (VELO) position under different set temperatures between $-30^\circ$ and $-20^\circ$. Alignment for both the translations and rotations of the two halves and of the modules, with certain constraints on the module positions, was performed for each run corresponding to a different temperature. [...] LHCB-FIGURE-2019-001.- Geneva : CERN, 04 - 4. Fulltext: PDF; Related data file(s): ZIP;
I apologize in advance for the length. The equation $\sin x = (\log x)^{-1}$ has exactly one solution $x_n$ in the interval $(2\pi n,2\pi n + \pi/2)$ for $n \geq 1$, and the exercise (de Bruijn, Asymptotic Methods in Analysis, ch. 2) asks me to show that $$ x_n = 2\pi n + (\log 2\pi n)^{-1} + O((\log 2 \pi n)^{-3}). $$ To start, we have $0 < x_n - 2 \pi n < 1$ and $$ \sin x_n = \sin (x_n - 2 \pi n) = (\log x_n)^{-1} \to 0, $$ so $x_n \to 2 \pi n$. It would make sense then to make the substitution $x_n = z + t$, where $t = 2 \pi n$, so that we're now concerned with finding the asymptotic behavior of $z$ in terms of $t$ as $t \to \infty$ in the equation $$ \sin z = (\log (z + t))^{-1}. $$ That is, we want to show that $$ z = (\log t)^{-1} + O((\log t)^{-3}). $$ So far I've only been able to show that $z = O((\log t)^{-1})$ through arguments which are probably not sound. I've tried to apply the Lagrange Inversion Formula but I can't seem to get it into the right form. If we let $w = \log t$, then $w = \frac{z}{f(z)}$, where $$ f(z) = \frac{z}{\log(e^{1/\sin z} - z)}. $$ But $f(0) = 0$ (and, probably more importantly, $f$ isn't analytic at $0$), so I can't apply Lagrange. Of course there may be a "correct" way to rearrange the equation to put it into Lagrange form. I've also considered applying Newton's method, but I don't know if that's valid. Applying the method to $(\log (z + t))^{-1} - \sin z$ with $z_0 = 0$ I get $$ z_1 = - (\log t)^{-1} ( 1 + O((t \log t)^{-2})), $$ which at least has the right asymptotic behavior in the first term. Trying to iterate using, for example, $x_0 = (\log t)^{-1}$ in the hopes of getting more stable terms leads me to a wall of computation, and I doubt that's the goal of the problem. More importantly, even if I did get a stable asymptotic series as I continued to iterate, I don't know whether I'm actually converging to the actual root of the equation. Lastly I should mention that I've also tried letting $z = x_n(2 \pi n)^{-1} - 1$, but this didn't seem to lead to anywhere helpful. Any tips?
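Not part of the original question, but a quick numerical experiment (plain Python; the helper below is my own, not de Bruijn's) supports the claimed expansion: the scaled error (x_n − 2πn − 1/log 2πn)·(log 2πn)^3 appears to stay roughly constant, consistent with an O((log 2πn)^{-3}) remainder.

import math

def x_n(n, iters=80):
    """Bisection for the root of sin(x) = 1/log(x) in (2*pi*n, 2*pi*n + pi/2)."""
    lo, hi = 2*math.pi*n + 1e-12, 2*math.pi*n + math.pi/2
    f = lambda x: math.sin(x) - 1.0/math.log(x)
    # f(lo) < 0 and f(hi) > 0, so a standard bisection brackets the unique root
    for _ in range(iters):
        mid = 0.5*(lo + hi)
        if f(lo)*f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5*(lo + hi)

for n in (10, 100, 1000, 10000):
    t = 2*math.pi*n
    err = x_n(n) - t - 1/math.log(t)
    print(n, err * math.log(t)**3)   # roughly constant, so err = O((log 2*pi*n)^(-3))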
In the previous section, we evaluated limits by looking at graphs or by constructing a table of values. In this section, we establish laws for calculating limits and learn how to apply these laws. In the Student Project at the end of this section, you have the opportunity to apply these limit laws to derive the formula for the area of a circle by adapting a method devised by the Greek mathematician Archimedes. We begin by restating two useful limit results from the previous section. These two results, together with the limit laws, serve as a foundation for calculating many limits. Evaluating Limits with the Limit Laws The first two limit laws were stated previously and we repeat them here. These basic results, together with the other limit laws, allow us to evaluate limits of many algebraic functions. Basic Limit Results For any real number \(a\) and any constant \(c\), \(\displaystyle \lim_{x→a}x=a\) \(\displaystyle \lim_{x→a}c=c\) Example \(\PageIndex{1}\): Evaluating a Basic Limit Evaluate each of the following limits using the Basic Limit Results above. \(\displaystyle \lim_{x→2}x\) \(\displaystyle \lim_{x→2}5\) Solution: The limit of x as x approaches a is a: \(\displaystyle \lim_{x→2}x=2\). The limit of a constant is that constant: \(\displaystyle \lim_{x→2}5=5\). We now take a look at the limit laws, the individual properties of limits. The proofs that these laws hold are omitted here. Limit Laws Let \(f(x)\) and \(g(x)\) be defined for all \(x≠a\) over some open interval containing \(a\). Assume that \(L\) and \(M\) are real numbers such that \(\displaystyle \lim_{x→a}f(x)=L\) and \(\displaystyle \lim_{x→a}g(x)=M\). Let \(c\) be a constant. Then, each of the following statements holds: Sum law for limits: \[\displaystyle \lim_{x→a}(f(x)+g(x))=\lim_{x→a}f(x)+\lim_{x→a}g(x)=L+M\] Difference law for limits: \[\displaystyle \lim_{x→a}(f(x)−g(x))=\lim_{x→a}f(x)−\lim_{x→a}g(x)=L−M\] Constant multiple law for limits: \[\displaystyle \lim_{x→a}cf(x)=c⋅\lim_{x→a}f(x)=cL\] Product law for limits: \[\displaystyle \lim_{x→a}(f(x)⋅g(x))=\lim_{x→a}f(x)⋅\lim_{x→a}g(x)=L⋅M\] Quotient law for limits: \[\displaystyle \lim_{x→a}\frac{f(x)}{g(x)}=\frac{\displaystyle \lim_{x→a}f(x)}{\displaystyle \lim_{x→a}g(x)}=\frac{L}{M}\] for \(M≠0\). Power law for limits: \[\displaystyle \lim_{x→a}(f(x))^n=(\lim_{x→a}f(x))^n=L^n\] for every positive integer \(n\). Root law for limits: \[\displaystyle \lim_{x→a}\sqrt[n]{f(x)}=\sqrt[n]{\lim_{x→a}f(x)}=\sqrt[n]{L}\] for all \(L\) if \(n\) is odd and for \(L≥0\) if \(n\) is even. We now practice applying these limit laws to evaluate a limit. Example \(\PageIndex{2A}\): Evaluating a Limit Using Limit Laws Use the limit laws to evaluate \[\lim_{x→−3}(4x+2). \nonumber\] Solution Let’s apply the limit laws one step at a time to be sure we understand how they work. We need to keep in mind the requirement that, at each application of a limit law, the new limits must exist for the limit law to be applied. \(\displaystyle \lim_{x→−3}(4x+2)\) = \(\displaystyle \lim_{x→−3} 4x + \lim_{x→−3} 2\) Apply the sum law. =\(\displaystyle 4⋅\lim_{x→−3} x + \lim_{x→−3} 2\) Apply the constant multiple law. =\(4⋅(−3)+2=−10.\) Apply the basic limit results and simplify. Notice this is equivalent to substituting \(-3\) for \(x\) in the original function. One just needs to be careful that the limit exists at this point. Example \(\PageIndex{2B}\): Using Limit Laws Repeatedly Use the limit laws to evaluate \[\lim_{x→2}\frac{2x^2−3x+1}{x^3+4}.
\nonumber\] Solution To find this limit, we need to apply the limit laws several times. Again, we need to keep in mind that as we rewrite the limit in terms of other limits, each new limit must exist for the limit law to be applied. \(\displaystyle \lim_{x→2}\frac{2x^2−3x+1}{x^3+4}=\frac{\displaystyle \lim_{x→2}(2x^2−3x+1)}{\displaystyle \lim_{x→2}(x^3+4)}\) Apply the quotient law, make sure that \((2)^3+4≠0.\) =\(\displaystyle \frac{\displaystyle 2⋅\lim_{x→2}x^2−3⋅\lim_{x→2}x+\lim_{x→2}1}{\displaystyle \lim_{x→2}x^3+\lim_{x→2}4}\) Apply the sum law and constant multiple law =\(\displaystyle \frac{\displaystyle 2⋅(\lim_{x→2}x)^2−3⋅\lim_{x→2}x+\lim_{x→2}1}{\displaystyle (\lim_{x→2}x)^3+\lim_{x→2}4}\) Apply the power law. =\(\displaystyle \frac{2(4)−3(2)+1}{(2)^3+4}=\frac{1}{4}\). Apply the basic limit laws and simplify. Notice this is equivalent to substituting \(2\) for \(x\) in the original function. One just needs to be careful that the limit exists at this point. Exercise \(\PageIndex{2}\) Use the limit laws to evaluate \(\displaystyle \lim_{x→6}(2x−1)\sqrt{x+4}\). In each step, indicate the limit law applied. Hint Begin by applying the product law. Or just substitute \(6\) for \(x\) in the original function. One just needs to be careful that the limit exists at this point. Answer \(11\sqrt{10}\)
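These computations can also be cross-checked with a computer algebra system. The snippet below (SymPy, assumed installed; not part of the original text) evaluates the three limits worked above and in the exercise.

from sympy import symbols, limit, sqrt

x = symbols('x')
print(limit(4*x + 2, x, -3))                        # -10
print(limit((2*x**2 - 3*x + 1)/(x**3 + 4), x, 2))   # 1/4
print(limit((2*x - 1)*sqrt(x + 4), x, 6))           # 11*sqrt(10)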
2018-08-25 06:58 Recent developments of the CERN RD50 collaboration / Menichelli, David (U. Florence (main) ; INFN, Florence) / CERN RD50 The objective of the RD50 collaboration is to develop radiation hard semiconductor detectors for very high luminosity colliders, particularly to face the requirements of the possible upgrade of the large hadron collider (LHC) at CERN. Some of the RD50 most recent results about silicon detectors are reported in this paper, with special reference to: (i) the progresses in the characterization of lattice defects responsible for carrier trapping; (ii) charge collection efficiency of n-in-p microstrip detectors, irradiated with neutrons, as measured with different readout electronics; (iii) charge collection efficiency of single-type column 3D detectors, after proton and neutron irradiations, including position-sensitive measurement; (iv) simulations of irradiated double-sided and full-3D detectors, as well as the state of their production process. 2008 - 5 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 596 (2008) 48-52 In : 8th International Conference on Large Scale Applications and Radiation Hardness of Semiconductor Detectors, Florence, Italy, 27 - 29 Jun 2007, pp.48-52

2018-08-25 06:58 Performance of irradiated bulk SiC detectors / Cunningham, W (Glasgow U.) ; Melone, J (Glasgow U.) ; Horn, M (Glasgow U.) ; Kazukauskas, V (Vilnius U.) ; Roy, P (Glasgow U.) ; Doherty, F (Glasgow U.) ; Glaser, M (CERN) ; Vaitkus, J (Vilnius U.) ; Rahman, M (Glasgow U.) / CERN RD50 Silicon carbide (SiC) is a wide bandgap material with many excellent properties for future use as a detector medium. We present here the performance of irradiated planar detector diodes made from 100-$\mu \rm{m}$-thick semi-insulating SiC from Cree. [...] 2003 - 5 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 509 (2003) 127-131 In : 4th International Workshop on Radiation Imaging Detectors, Amsterdam, The Netherlands, 8 - 12 Sep 2002, pp.127-131

2018-08-24 06:19 Measurements and simulations of charge collection efficiency of p$^+$/n junction SiC detectors / Moscatelli, F (IMM, Bologna ; U. Perugia (main) ; INFN, Perugia) ; Scorzoni, A (U. Perugia (main) ; INFN, Perugia ; IMM, Bologna) ; Poggi, A (Perugia U.) ; Bruzzi, M (Florence U.) ; Lagomarsino, S (Florence U.) ; Mersi, S (Florence U.) ; Sciortino, Silvio (Florence U.) ; Nipoti, R (IMM, Bologna) Due to its excellent electrical and physical properties, silicon carbide can represent a good alternative to Si in applications like the inner tracking detectors of particle physics experiments (RD50, LHCC 2002–2003, 15 February 2002, CERN, Ginevra). In this work p$^+$/n SiC diodes realised on a medium-doped ($1 \times 10^{15} \rm{cm}^{−3}$), 40 $\mu \rm{m}$ thick epitaxial layer are exploited as detectors and measurements of their charge collection properties under $\beta$ particle radiation from a $^{90}$Sr source are presented. [...] 2005 - 4 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 546 (2005) 218-221 In : 6th International Workshop on Radiation Imaging Detectors, Glasgow, UK, 25-29 Jul 2004, pp.218-221

2018-08-24 06:19 Measurement of trapping time constants in proton-irradiated silicon pad detectors / Krasel, O (Dortmund U.) ; Gossling, C (Dortmund U.) ; Klingenberg, R (Dortmund U.) ; Rajek, S (Dortmund U.) ; Wunstorf, R (Dortmund U.) Silicon pad-detectors fabricated from oxygenated silicon were irradiated with 24-GeV/c protons with fluences between $2 \cdot 10^{13} \ n_{\rm{eq}}/\rm{cm}^2$ and $9 \cdot 10^{14} \ n_{\rm{eq}}/\rm{cm}^2$. The transient current technique was used to measure the trapping probability for holes and electrons. [...] 2004 - 8 p. - Published in : IEEE Trans. Nucl. Sci. 51 (2004) 3055-3062 In : 50th IEEE 2003 Nuclear Science Symposium, Medical Imaging Conference, 13th International Workshop on Room Temperature Semiconductor Detectors and Symposium on Nuclear Power Systems, Portland, OR, USA, 19 - 25 Oct 2003, pp.3055-3062

2018-08-24 06:19 Lithium ion irradiation effects on epitaxial silicon detectors / Candelori, A (INFN, Padua ; Padua U.) ; Bisello, D (INFN, Padua ; Padua U.) ; Rando, R (INFN, Padua ; Padua U.) ; Schramm, A (Hamburg U., Inst. Exp. Phys. II) ; Contarato, D (Hamburg U., Inst. Exp. Phys. II) ; Fretwurst, E (Hamburg U., Inst. Exp. Phys. II) ; Lindstrom, G (Hamburg U., Inst. Exp. Phys. II) ; Wyss, J (Cassino U. ; INFN, Pisa) Diodes manufactured on a thin and highly doped epitaxial silicon layer grown on a Czochralski silicon substrate have been irradiated by high energy lithium ions in order to investigate the effects of high bulk damage levels. This information is useful for possible developments of pixel detectors in future very high luminosity colliders because these new devices present superior radiation hardness than nowadays silicon detectors. [...] 2004 - 7 p. - Published in : IEEE Trans. Nucl. Sci. 51 (2004) 1766-1772 In : 13th IEEE-NPSS Real Time Conference 2003, Montreal, Canada, 18 - 23 May 2003, pp.1766-1772

2018-08-24 06:19 Radiation hardness of different silicon materials after high-energy electron irradiation / Dittongo, S (Trieste U. ; INFN, Trieste) ; Bosisio, L (Trieste U. ; INFN, Trieste) ; Ciacchi, M (Trieste U.) ; Contarato, D (Hamburg U., Inst. Exp. Phys. II) ; D'Auria, G (Sincrotrone Trieste) ; Fretwurst, E (Hamburg U., Inst. Exp. Phys. II) ; Lindstrom, G (Hamburg U., Inst. Exp. Phys. II) The radiation hardness of diodes fabricated on standard and diffusion-oxygenated float-zone, Czochralski and epitaxial silicon substrates has been compared after irradiation with 900 MeV electrons up to a fluence of $2.1 \times 10^{15} \ \rm{e} / cm^2$. The variation of the effective dopant concentration, the current related damage constant $\alpha$ and their annealing behavior, as well as the charge collection efficiency of the irradiated devices have been investigated. 2004 - 7 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 530 (2004) 110-116 In : 6th International Conference on Large Scale Applications and Radiation Hardness of Semiconductor Detectors, Florence, Italy, 29 Sep - 1 Oct 2003, pp.110-116

2018-08-24 06:19 Recovery of charge collection in heavily irradiated silicon diodes with continuous hole injection / Cindro, V (Stefan Inst., Ljubljana) ; Mandić, I (Stefan Inst., Ljubljana) ; Kramberger, G (Stefan Inst., Ljubljana) ; Mikuž, M (Stefan Inst., Ljubljana ; Ljubljana U.) ; Zavrtanik, M (Ljubljana U.) Holes were continuously injected into irradiated diodes by light illumination of the n$^+$-side. The charge of holes trapped in the radiation-induced levels modified the effective space charge. [...] 2004 - 3 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 518 (2004) 343-345 In : 9th Pisa Meeting on Advanced Detectors, La Biodola, Italy, 25 - 31 May 2003, pp.343-345

2018-08-24 06:19 First results on charge collection efficiency of heavily irradiated microstrip sensors fabricated on oxygenated p-type silicon / Casse, G (Liverpool U.) ; Allport, P P (Liverpool U.) ; Martí i Garcia, S (CSIC, Catalunya) ; Lozano, M (Barcelona, Inst. Microelectron.) ; Turner, P R (Liverpool U.) Heavy hadron irradiation leads to type inversion of n-type silicon detectors. After type inversion, the charge collected at low bias voltages by silicon microstrip detectors is higher when read out from the n-side compared to p-side read out. [...] 2004 - 3 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 518 (2004) 340-342 In : 9th Pisa Meeting on Advanced Detectors, La Biodola, Italy, 25 - 31 May 2003, pp.340-342

2018-08-23 11:31 Formation and annealing of boron-oxygen defects in irradiated silicon and silicon-germanium n$^+$–p structures / Makarenko, L F (Belarus State U.) ; Lastovskii, S B (Minsk, Inst. Phys.) ; Korshunov, F P (Minsk, Inst. Phys.) ; Moll, M (CERN) ; Pintilie, I (Bucharest, Nat. Inst. Mat. Sci.) ; Abrosimov, N V (Unlisted, DE) New findings on the formation and annealing of interstitial boron-interstitial oxygen complex ($\rm{B_iO_i}$) in p-type silicon are presented. Different types of n+−p structures irradiated with electrons and alpha-particles have been used for DLTS and MCTS studies. [...] 2015 - 4 p. - Published in : AIP Conf. Proc. 1583 (2015) 123-126
An Unfinished Example At the end of class today, someone asked if we could do another example of a partial fractions integral involving an irreducible quadratic. We decided to look at the integral $$ \int \frac{1}{(x^2 + 4)(x+1)}dx. $$ Notice that $x^2 + 4$ is an irreducible quadratic polynomial. So when setting up the partial fraction decomposition, we treat the $x^2 + 4$ term as a whole. So we seek to find a decomposition of the form $$ \frac{1}{(x^2 + 4)(x+1)} = \frac{A}{x+1} + \frac{Bx + C}{x^2 + 4}. $$ Now that we have the decomposition set up, we need to solve for $A,B,$ and $C$ using whatever methods we feel most comfortable with. Multiplying through by $(x^2 + 4)(x+1)$ leads to $$ 1 = A(x^2 + 4) + (Bx + C)(x+1) = (A + B)x^2 + (B + C)x + (4A + C). $$ Matching up coefficients leads to the system of equations $$\begin{align} 0 &= A + B \\ 0 &= B + C \\ 1 &= 4A + C. \end{align}$$ So we learn that $A = -B = C$, and $A = 1/5$. So $B = -1/5$ and $C = 1/5$. Together, this means that $$ \frac{1}{(x^2 + 4)(x+1)} = \frac{1}{5}\frac{1}{x+1} + \frac{1}{5} \frac{-x + 1}{x^2 + 4}. $$ Recall that if you wanted to, you could check this decomposition by finding a common denominator and checking through. Now that we have performed the decomposition, we can return to the integral. We now have that $$ \int \frac{1}{(x^2 + 4)(x+1)}dx = \underbrace{\int \frac{1}{5}\frac{1}{x+1}dx}_ {\text{first integral}} + \underbrace{\int \frac{1}{5} \frac{-x + 1}{x^2 + 4} dx.}_ {\text{second integral}} $$ We can handle both of the integrals on the right hand side. The first integral is $$ \frac{1}{5} \int \frac{1}{x+1} dx = \frac{1}{5} \ln (x+1) + C. $$ The second integral is a bit more complicated. It's good to see if there is a simple $u$-substitution, since there is an $x$ in the numerator and an $x^2$ in the denominator. But unfortunately, this integral needs to be further broken into two pieces that we know how to handle separately. $$ \frac{1}{5} \int \frac{-x + 1}{x^2 + 4} dx = \underbrace{\frac{-1}{5} \int \frac{x}{x^2 + 4}dx}_ {\text{first piece}} + \underbrace{\frac{1}{5} \int \frac{1}{x^2 + 4}dx.}_ {\text{second piece}} $$ The first piece is now a $u$-substitution problem with $u = x^2 + 4$. Then $du = 2x\, dx$, and so $$ \frac{-1}{5} \int \frac{x}{x^2 + 4}dx = \frac{-1}{10} \int \frac{du}{u} = \frac{-1}{10} \ln u + C = \frac{-1}{10} \ln (x^2 + 4) + C. $$ The second piece is one of the classic trig substitutions. So we draw a triangle. In this triangle, thinking of the bottom-left angle as $\theta$ (sorry, I forgot to label it), we have $2\tan \theta = x$, so that $2 \sec^2 \theta \, d \theta = dx$. We can express the so-called hard part of the triangle by $2\sec \theta = \sqrt{x^2 + 4}$. Going back to our integral, we can think of $x^2 + 4$ as $(\sqrt{x^2 + 4})^2$ so that $x^2 + 4 = (2 \sec \theta)^2 = 4 \sec^2 \theta$. We can now write our integral as $$ \frac{1}{5} \int \frac{1}{x^2 + 4}dx = \frac{1}{5} \int \frac{1}{4 \sec^2 \theta} 2 \sec^2 \theta \, d \theta = \frac{1}{5} \int \frac{1}{2} d\theta = \frac{1}{10} \theta. $$ As $2 \tan \theta = x$, we have that $\theta = \text{arctan}(x/2)$. Inserting this into our expression, we have $$ \frac{1}{5} \int \frac{1}{x^2 + 4} dx = \frac{1}{10} \text{arctan}(x/2) + C. 
$$ Combining the first integral and the first and second parts of the second integral together (and combining all the constants $C$ into a single constant, which we also denote by $C$), we reach the final expression $$ \int \frac{1}{(x^2 + 4)(x + 1)} dx = \frac{1}{5} \ln (x+1) - \frac{1}{10} \ln(x^2 + 4) + \frac{1}{10} \text{arctan}(x/2) + C. $$ And this is the answer. Other Notes If you have any questions or concerns, please let me know. As a reminder, I have office hours on Tuesday from 9:30–11:30 (or perhaps noon) in my office, and I highly recommend attending the Math Resource Center in the Kassar House from 8pm-10pm, offered Monday-Thursday. [Especially on Tuesdays and Thursdays, when there tend to be fewer people there]. On my course page, I have linked to two additional resources. One is to Paul’s Online Math notes for partial fraction decomposition (which I think is quite a good resource). The other is to the Khan Academy for some additional worked through examples on polynomial long division, in case you wanted to see more worked examples. This note can also be found on my website, or in pdf form. Good luck, and I’ll see you in class.
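If you would like to double-check the whole computation symbolically, here is a short SymPy session. This is my addition rather than part of the class notes, and it assumes SymPy is installed; it verifies both the decomposition and the antiderivative (up to a constant).

from sympy import symbols, apart, integrate, simplify, log, atan, Rational

x = symbols('x')
f = 1/((x**2 + 4)*(x + 1))

print(apart(f, x))   # prints the same partial fraction decomposition found above

antiderivative = integrate(f, x)
proposed = Rational(1, 5)*log(x + 1) - Rational(1, 10)*log(x**2 + 4) + Rational(1, 10)*atan(x/2)
# both antiderivatives have the same derivative, so they differ only by a constant
print(simplify(antiderivative.diff(x) - proposed.diff(x)))   # -> 0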
Note that the usual problem, which is quite difficult if you do not know what to expect, is slightly different: https://en.wikipedia.org/wiki/Vieta_jumping#Example_2 =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= Define the integer $k \geq 3$ as $$k = \frac{x^2+y^2-1}{xy}.$$ We are solving $$ x^2 - kxy + y^2 = 1. $$ The discriminant of the quadratic form is $\Delta = k^2 - 4$, which is positive but not a square. As a result, if we can solve $\tau^2 - \Delta \sigma^2 = 4$, we can construct the generator of the (oriented) automorphism group of the form. Now, $k^2 - \Delta = 4,$ so $\tau = \pm k, \sigma = \pm 1.$ Given a form (positive nonsquare discriminant) of coefficients $\langle A,B,C\rangle$ we can take the generator (in $SL_2 \mathbb Z$) as $$\left(\begin{array}{rr}\frac{\tau - B \sigma}{2} & -C \sigma \\ A \sigma & \frac{\tau + B \sigma}{2}\end{array}\right)$$ With $\langle A,B,C\rangle = \langle 1,-k,1\rangle$ we get $$ U = \left(\begin{array}{rr}k & -1 \\ 1 & 0\end{array}\right)$$ and will stick with this one, as it is convenient for keeping $x \geq y \geq 0.$ The transformation from one solution $(x,y)$ to the next is matrix multiplication with the column vector $(x,y)^T$ on the right. That is, $$ \color{blue}{ (x,y) \mapsto (kx-y, x)}. $$ The first few solutions with $k \geq 3$ are $$ (1,0), $$ $$ (k,1), $$ $$ (k^2 - 1,k), $$ $$ (k^3 - 2k,k^2 - 1), $$ $$ (k^4 - 3 k^2 + 1,k^3 - 2k). $$ Oh, as far as actual division, $(1,0)$ is not legal in the original fraction. Life is like that. Let's try $k=3$ in the fraction: $(3,1)$ gives $9/3 = 3.$ $(8,3)$ gives $72/24 = 3.$ The final thing is Cayley-Hamilton. The matrix I named $U$ above satisfies $U^2 - k U + I = 0,$ or $U^2 = kU - I.$ As a result, both $x$ and $y$ satisfy linear recurrences, $$ x_{j+2} = k x_{j+1} - x_j, $$ $$ y_{j+2} = k y_{j+1} - y_j. $$ Again with $k=3$, we get $x$ in $$ 1, 3, 8, 21, 55, 144, $$ Note that these are every second Fibonacci number. With $k=4$, we get $x$ in $$ 1, 4, 15, 56, 209, 780, $$ With $k=5$, we get $x$ in $$ 1, 5, 24, 115, 551, 2640, $$ With your $k=2$ you have $x^2 - 2xy + y^2 = 1,$ or $(x-y)^2 = 1,$ or $x-y = \pm 1.$
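A few lines of Python (my own sketch, not part of the answer above) generate these solution pairs from the recurrence and confirm that each one gives the prescribed value of k.

def solutions(k, count=6):
    """Yield (x, y) with x^2 - k*x*y + y^2 = 1, starting from (1, 0), via (x, y) -> (k*x - y, x)."""
    x, y = 1, 0
    for _ in range(count):
        yield x, y
        x, y = k*x - y, x

for k in (3, 4, 5):
    pairs = list(solutions(k))
    print(k, pairs)
    # skip (1, 0), where the original fraction is undefined, and check the quotient
    assert all((x*x + y*y - 1) % (x*y) == 0 and (x*x + y*y - 1)//(x*y) == k
               for x, y in pairs[1:])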
Let $\{S_n\}$ and $\{T_n\}$ be independent renewal processes with interrenewal distributions $F$ and $G$. Define $$N(t):=N_S(t) + N_T(t)=\sum_{n=1}^\infty \left[\mathsf 1_{(0,t]}(S_n) +\mathsf 1_{(0,t]}(T_n)\right]. $$ Then the sequence of jump times of $N(t)$, $$U_n = \inf\{t: N(t)=n\} $$ is not in general a renewal sequence, because the inter-jump times need not be i.i.d. For a counterexample, consider the case where $F$ and $G$ are constant distributions, e.g. $S_n=\{i, 2i, 3i, \ldots\}$ and $T_n=\{j, 2j, 3j, \ldots\}$, with $i\ne j$. Take $i=2$ and $j=3$; then $$U_n-U_{n-1} = \begin{cases}1,& n\equiv 1,2,3,5\pmod 6\\2,& n\equiv 0,4\pmod 6.\end{cases}$$ Further, we have that $$R(t) = \mathbb E[N(t)] \stackrel{t\to\infty}\longrightarrow \frac43, $$ but as $R(n)-R(n-1)=U_n-U_{n-1}$, it is clear that $\lim_{t\to\infty}R(t+1)-R(t)$ does not exist, and thus Blackwell's renewal theorem does not hold. There are two remaining questions to consider: is there more we can say about $N(t)$ than that it is a counting process, and is stability under superposition equivalent to having independent and stationary increments (i.e. being a Poisson process)? As for the first, we can describe the jump times $\{U_n\}$ by the transitions in a Markov renewal process on $$E=\{(X_n,t) : X_n\in \{S,T\}, t>0\}. $$ As for the second, the superposition of $\{S_n\}$ and $\{T_n\}$ is a renewal process iff one of the following holds: (i) One of the processes, WLOG $\{S_n\}$, has multiple renewals and $\{T_n\}$ does not (i.e. $F(0)>0$ and $G(0)=0$), $F$ and $G$ are concentrated on a semi-lattice $\{0,\delta,2\delta,\ldots\}$ and either \begin{align}F(x) &= \left(1 - p^{\left\lfloor\frac x\delta\right\rfloor+1}\right)\mathsf 1_{[0,\infty)}(x), \quad 0<p<1\\G(x) &= \mathbb 1_{[\delta,\infty)}(x)\end{align} or \begin{align}F(x) &= \left(1 - p^{\left\lfloor\frac x\delta\right\rfloor+1}\right)\mathsf 1_{[0,\infty)}(x), \quad 0<p<1\\G(x) &= \left(1 - q^{\left\lfloor\frac x\delta\right\rfloor}\right)\mathsf 1_{[0,\infty)}(x), \quad 0<q<1.\\\end{align} (ii) Neither process has multiple renewals, and $F$ and $G$ are exponential, hence $\{S_n\}$, $\{T_n\}$, and their superposition are Poisson processes. The proof is given in Pairs of renewal processes whose superposition is a renewal process by J.A. Ferreira (2000). Note in particular this means that for ordinary renewal processes with strictly positive inter-renewal times, stability under superposition is equivalent to the processes being Poisson.
I'm writing an equation which contains a relatively long subscript under a summation, which sits just next to a bracket. The code is the following $$\mu_j=\E(X_j)=\E\left( \sum_{i\in Pa(X_j)}\lambda_{ij}X_i+W_j\right) =\sum_{i\in Pa(X_j)}\lambda_{ij}\mu_i+v_j.$$ With the commands \left, \right the brackets are too big, and there is also too much white space between the beginning of the expression and the end of the summation. I've tried to use commands such as \mathclap and \mathrlap together with \Biggl and similar in order to get a better looking equation, but I did not get any satisfactory result. The problem is also that with these commands the alignment of the whole equation seems to degrade. Have you got any suggestions?
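One possible direction, sketched below with the mathtools package (the \E macro is assumed to be whatever you already have in your preamble, and this is only an illustration, not the only fix): \smashoperator removes the horizontal space the wide subscript reserves, and manually sized \biggl( \biggr) fences avoid the oversized \left( \right) pair. You may need to add a thin space by hand if the overhanging subscript collides with a neighbouring symbol.

\documentclass{article}
\usepackage{mathtools}            % provides \smashoperator (and \mathclap)
\newcommand{\E}{\operatorname{E}} % assumption: stand-in for the asker's own \E macro
\begin{document}
\[
  \mu_j=\E(X_j)
       =\E\biggl(\smashoperator{\sum_{i\in Pa(X_j)}}\lambda_{ij}X_i+W_j\biggr)
       =\smashoperator{\sum_{i\in Pa(X_j)}}\lambda_{ij}\mu_i+v_j .
\]
\end{document}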
Practice Paper 4 Question 20 The Csatian language has the following two rules: (i) \(B\) is a word, and (ii) if \(x\) is a word then \(BBBxxx\) is a word. Give an expression for the number of \(B\)s in a valid word. Suggest a third rule (iii) such that all multiples of 3 \(B\)s are valid words, and the only other valid word is a single \(B.\) Replace rule (iii) with the following: (iii) if \(BBBBx\) is a word then \(x\) is a word. Find all pairs \((p,q)\) such that, for all integers \(n\ge0,\) the expression \(pn+q\) gives the lengths of all valid words. Related topics Warm-up Questions Find a closed form solution to the recurrence defined by \(a_0=5, a_{n+1}=2a_n+1.\) The Climb language has two rules: (i) it contains the word \(AAAAAAAA,\) and (ii) if \(xx\) is a word, then \(x\) is a word. Write down all the words in the language. Give the possible values of \(b\) in \(3 \cdot 2^i \equiv b \mod 12\) for \(i \in \{0,1,2,3, \ldots\}.\) Hints Hint 1 (Part (a)) Find and solve a recurrence relation for the length of possible words. Hint 2 (Part (b)) What is the characteristic of the valid words described by rule (ii)? Hint 3 (Part (b)) ... in terms of their divisibility? Hint 4 (Part (b)) Rules (i) and (ii) only cover the case of a single \(B\) and some multiples of 3 \(B\)s. How could rule (iii) cover all multiples of 3 \(B\)s, given that some multiples of 3 \(B\)s are valid words? Hint 5 (Part (b)) Could you modify rule (iii) given in part (c) to do so? Note: Sometimes, reading the next part of the question helps. Hint 6 (Part (c)) From the expression in part (a), find a new expression for the length of the valid words described by the new rule (iii). Hint 7 (Part (c)) If you subtract \(4k\) from the words described by rule (ii), do you get all the naturals? Hint 8 (Part (c)) You should notice that by subtracting \(4k\) from the words described by rule (ii), you are not guaranteed to get all naturals (why?). Which of those naturals are not possible? Try expressing them in a more general form. Hint 9 (Part (c)) Prove that the new expression you got earlier could not be equal to those naturals. Hint: It is sufficient to show that the expression is not equal to the base case of the naturals (why?). Solution We use the recurrence relation \(l_i=3+3l_{i-1},\) which unwinds to give us: \[ \begin{align} l_i&= 3+ 3(3+3l_{i-2}) \\ &=3+3^2+3^2 \cdot l_{i-2} \\ &=\sum_{k=1}^{i}3^k + 3^{i}\cdot l_0 \\ &=\frac{3(3^{i}-1)}{2}+3^{i} \\ &=\frac{5\cdot3^i-3}{2}. \end{align}\] Factorising \(3\) from \(l_i\) gives \(l_i = 3\cdot\frac{5\cdot3^{i-1}-1}{2},\) whose second factor is an integer because its numerator is divisible by \(2,\) which means \(l_i\) is a multiple of \(3.\) Thus, if we want to obtain all multiples of \(3\) as valid words, we can add a rule which subtracts \(3\) \(B\)s from any valid word: "if \(BBBx\) is a word then \(x\) is a word". You can write \(l'_i=\frac{5\cdot3^i-3}{2}-4n,\) where \(i,n\) are the number of applications of rule (ii) and rule (iii) respectively. But the question asks you to simplify this. From (a), \(l_i\) is a multiple of \(3,\) but it does not give all multiples of \(3.\) Hence by subtracting \(4n\) from it, you are not guaranteed to get all naturals, so it is not obvious which of the \(4k, 4k+1, 4k+2, 4k+3\) forms you can generate. The proof is in showing that \(4k\) and \(4k+3\) are not possible. One approach could be: Show that there do not exist any non-negative integers \(i,n\) such that \(\frac{5\cdot3^i-3}{2}-4n=4:\) Rearranging the given equation yields \(5 \cdot 3^i - 8n = (2 \cdot 4) + 3 = 11,\) i.e.
we require that \(5 \cdot 3^i \equiv 11 \mod 8\). But for \(i = 0,1,2,3,4\) we get \(3^i \equiv 1, 3, 1, 3, 1.\) This pattern repeats because:\[ 3^i \equiv 1 \mod 8 \implies 3^{i+1} \equiv 1 \cdot 3 \equiv3 \mod 8 \\ 3^i \equiv 3 \mod 8 \implies 3^{i+1} \equiv 9 \equiv 1\mod 8 \] Hence, \(3^i \in \{1,3\}\) when taken \(\text{mod }8\). Following that, \(5 \cdot 3^i\) can only be \(5 \cdot 1 \equiv 5\) or \(5 \cdot 3 \equiv 15 \equiv 7\). Since \(11 \equiv 3 \mod 8,\) the equation can never be satisfied. Show that there do not exist any non-negative integers \(i,n\) such that \(\frac{5\cdot3^i-3}{2}-4n=3:\) Rearranging similarly, the problem becomes \(5\cdot 3^i \equiv 9 \mod 8\). Using a similar argument as above, \(5 \cdot 3^i\) is either \(5\) or \(7\) in \(\text{mod }8,\) so the equation can never be satisfied. If you have queries or suggestions about the content on this page or the CSAT Practice Platform then you can write to us at oi.[email protected]. Please do not write to this address regarding general admissions or course queries.
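The reachable lengths can also be checked by brute force. The short Python search below is my addition, not part of the official solution; it applies the rules directly (rule (i) gives length 1, rule (ii) sends a length l to 3l + 3, and the replacement rule (iii) sends l to l − 4) and reports which residues mod 4 actually occur.

def reachable_lengths(limit=60):
    """Brute-force the set of reachable word lengths under rules (i), (ii) and the new (iii).
    Intermediate lengths are capped well above `limit` so the search terminates; for this
    small bound the cap does not cut off any derivation of a length <= limit."""
    cap = 10*limit
    seen = {1}
    changed = True
    while changed:
        changed = False
        for l in list(seen):
            for m in (3*l + 3, l - 4):
                if 1 <= m <= cap and m not in seen:
                    seen.add(m)
                    changed = True
    return sorted(l for l in seen if l <= limit)

lengths = reachable_lengths()
print(lengths)                           # 1, 2, 5, 6, 9, 10, 13, 14, ...
print(sorted({l % 4 for l in lengths}))  # [1, 2]: only lengths of the form 4n+1 or 4n+2 appear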
I've been studying Fourier series, and in trying to compute the Fourier series for the function $f: (-\pi,\pi)\to \mathbb{R}$ given by $f(x)=|\sin x|$ I've found something quite strange, and I'm not able to see what I've done wrong. First, the Fourier series is $$F_{(-\pi,\pi)}[f](x)=\dfrac{a_0}{2}+\sum_{n=1}^\infty a_n \cos nx+b_n \sin nx$$ We see quite easily that $b_n = 0$ for all $n\in \mathbb{N}$, while $a_0 = 4/\pi$. On the other hand we have $$a_n = \dfrac{1}{\pi} \int_{-\pi}^\pi f(x) \cos nx dx = \dfrac{2}{\pi}\int_0^\pi \sin x \cos nx dx$$ Then I used the fact that $\sin x = \frac{1}{2i}(e^{ix}-e^{-ix})$ and $\cos nx = \frac{1}{2}(e^{inx}+e^{-inx})$ to find that $$\sin x \cos nx = \dfrac{1}{4i}(e^{ix}-e^{-ix})(e^{inx}+e^{-inx})=\dfrac{1}{2}\left[\sin((n+1)x)-\sin((n-1)x)\right]$$ so that substituting this into the formula for $a_n$ gives $$a_n = \dfrac{-2}{\pi(n^2-1)}[(-1)^n+1]$$ This certainly doesn't work for $n=1$, because we get a zero in the denominator. On the other hand, using the formula for $a_n$ with $n=1$ gives $$a_1 = \dfrac{2}{\pi}\int_0^\pi \sin x \cos x dx = 0.$$ Why doesn't the calculation I did work for $n=1$? I can't find what I've done wrong. Is there something I've missed?
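Not part of the question, but a quick SymPy check (assuming SymPy is available) of the first few coefficients makes the situation concrete: the closed form matches the direct integral for n ≥ 2, while n = 1 has to be computed separately and gives 0.

from sympy import pi, sin, cos, integrate, symbols, simplify

x = symbols('x')

def a(n):
    """Fourier cosine coefficient of |sin x| on (-pi, pi), using evenness of the integrand."""
    return simplify(2*integrate(sin(x)*cos(n*x), (x, 0, pi))/pi)

print(a(1))                                            # 0
for n in range(2, 7):
    closed_form = -2*((-1)**n + 1)/(pi*(n**2 - 1))
    print(n, a(n), simplify(a(n) - closed_form))       # difference is 0 for every n >= 2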
For j=1/2 there are two states: m=+/- 1/2, and for j=3/2 there are 4 different m-states.The "weak" Zeeman effect just refers to a situation where the energy shift due to the magnetic field is small and can be treated with perturbation theory: the unperturbed Hamiltonian has split the l=1 level... It's because the interaction that splits the energies of the state is the spin-orbit coupling, proportional to \vec{L}\cdot\vec{S}, which can be rewritten as being proportional to the difference \vec{J}^2-\vec{L}^2-\vec{S}^2, which is dependent only on the quantum numbers j and l (s=1/2 in... You need more information to prove any of those relations. You must have been given some info about what the b's are supposed to be, for instance. I assumed that you had been told that the b's are fermionic annihilation operators. There are (at least) two sorts of average you can take:If you want to calculate a force averaged over *distance*, you can use the change in kinetic energy, divided by distance: Work =\delta E=F_average * dIf you want to calculate an average over *time*, then that would be given by the change... No, you have to prove U is unitary.Edit: you already seem to know that U being unitary is equivalent to the b's satisfying the same anticommutation relations as the c's. But that's all there is to it.... <\Psi | L_z | \Psi> is the expectation (i.e., average) value of L_z.The probabilities to find specific values m for L_z follow from writing your wavefunction in the form|\Psi>=\sum_m c_m |l,m>as in your special case you only have 1 value of l in the superposition.In this case the... 1. yes, with a small proviso: the gas that leaves the exhaust pipe would be considered as leaving the car, and thus contributing a tiny force in the forward direction!2. yes, apart from the tiny effect mentioned above, the only external forces on the car in the horizontal direction are... The engine plus gasoline plus tires plus dashboard plus windows etc. etc. etc. are all part of "the car". All forces that act internally to the car are parts of action-reaction forces and therefore do not give rise to a net external force. The motion of the car as a whole (more precisely, its...
Let $X$ be a real topological vector space, and let $X^*$ be the dual space to $X.$ Denote the dual pairing by $$\langle \cdot ,\cdot \rangle :X^{*}\times X\to \mathbb {R}.$$ For a function $f:X\to\mathbb{R}\cup \{\pm\infty\}$ taking values on the extended real number line, the convex conjugate $$f^{*}:X^{*}\to \mathbb {R} \cup \{-\infty ,+\infty \}$$ is defined in terms of the supremum by $$f^{*}\left(x^{*}\right):=\sup \left\{\left.\left\langle x^{*},x\right\rangle -f\left(x\right)\right|x\in X\right\}.$$ Bachir introduced the notion of the conjugate $f^\times$ of $f:X\to\mathbb{R}$, defined as $$f^\times (\phi) := \sup_{x\in X}\{\phi(x) - f(x)\}$$ for all $\phi\in C_b(X),$ the set of all bounded real-valued continuous functions on $X.$ Question: Is there a notion of convex conjugate for vector-valued functions? More precisely, if $E$ is a Banach space, can we define the convex conjugate of $f:X\to E$ by $$\tilde{f}^\times(\phi) = \sup_{x\in X}\{\|\phi(x)\| - \|f(x)\|\}$$ for all $\phi\in C_b(X,E),$ the set of all bounded $E$-valued continuous functions on $X?$
Finding a Structure’s Best Design with Topology Optimization Think about the first architects who designed a bridge above water. The design process likely included several trials and subsequent failures before they could safely allow people to cross the river. COMSOL Multiphysics and the Optimization Module would have helped make this process much simpler, if they had computers at the time, of course. Before we start to discuss building and optimizing bridges, let’s first identify the best design for a simple beam with the help of topology optimization. A Simple Beam Case In our structural steel beam example, both ends of the beam are on rollers, with an edge load acting on the top of the middle part. The beam’s dimensions are 6 m x 1 m x 0.5 m. In this case, we stay in the linear elastic domain and, due to the dimensions, we can use a 2D plane stress formulation. Note that there is a symmetry axis at x = 3 m. Geometry of the beam with loads and constraints. Choosing the Right Optimization Type to Find a Structure’s Best Design Using the beam geometry depicted above, we want to find the best compromise between the amount of material used and the stiffness of the beam. In order to do that, we need to convert this into a mathematically formal language for optimization. Every optimization problem consists of finding the best design vector \alpha, such that the objective function F(\alpha) is minimal. Mathematically, this is written as \displaystyle \min_{\alpha} F(\alpha). The design vector choice defines the type of optimization problem that is being solved: If \alpha is a set of parameters driving the geometry (e.g., length or height), we are talking about parametric optimization. If \alpha controls the exterior curves of the geometry, we are talking about shape optimization. If \alpha is a function determining whether a certain point of the geometry is void or solid, we are talking about topology optimization. Topology optimization is applied when you have no idea of the best design structure. On the one hand, this method is more flexible than others because any shape can be obtained as a result. On the other hand, the result is not always directly feasible. As such, topology optimization is often used in the initial phase, providing guidelines for future design schemes. In practice, we define an artificial density function \rho_{design}(X), which is between 0 and 1 for each point X = \lbrace x,y \rbrace of the geometry. For a structural mechanics simulation, this function is used to build a penalized Young’s modulus: E(X) = \rho_{design}(X) \, E_0, where E_0 is the true Young’s modulus. Thus, \rho_{design}= 0 corresponds to a void part and \rho_{design}= 1 corresponds to a solid part. As mentioned before, in regards to the objective function, we want to maximize the stiffness of the beam. For structural mechanics problems, maximizing the stiffness is the same as minimizing the compliance. In terms of energy, it is also equivalent to minimizing the total strain energy, defined as: \int_\Omega W_s \, d\Omega, where W_s is the elastic strain energy density. Our topology optimization problem is thus written as: \displaystyle \min_{\rho_{design}} \int_\Omega W_s \, d\Omega \quad \text{with} \quad 0 \le \rho_{design}(X) \le 1. Addressing Topology Optimization in COMSOL Multiphysics Now that our optimization problem has been defined, we can set it up in COMSOL Multiphysics. In this blog post, we will not detail the solid mechanics portion of our simulation. There are, however, several tutorials from our Structural Mechanics Module that help showcase this element. When adding the Optimization physics interface, it is possible to define a Control Variable Field on a domain.
As a first discretization for \rho_{design}, we can select a constant element order. This means that we will have one value of \rho_{design} through all the mesh elements. After this step is completed, a new Young’s modulus can be defined for the structural mechanics simulation, such as E(X)=\rho_{design} E_0. As referenced above, the objective function is an integration over the domain. In the Optimization interface, we select Integral Objective. The elastic strain energy density is a predefined variable named solid.Ws. Thus, the objective can be easily defined as \int_\Omega Ws \ d\Omega. Our discussion today will not focus on how optimization works in practice. Basically, the optimization solver begins with an initial guess and iterates on the design vector until the objective function has reached its minimum. If we run our optimization problem, we get the results shown below. Results from the initial test. The solution is trivial in order to maximize the stiffness. The optimal solution shows the full amount of the original material! After this initial test, we can conclude that a mass constraint is necessary if we want to make the optimization algorithm select a design. With a constraint of 50 percent, this could be written as: \int_\Omega \rho_{design} \, d\Omega \le 0.5 \int_\Omega d\Omega. In COMSOL Multiphysics, a mass constraint can be included by adding an Integral Inequality Constraint. Additionally, the initial value for \rho_{design} needs to be set to 0.5 in order to respect this constraint at the initial state. Let’s have a look at the results from this new problem, which are illustrated in the following animation. Results with the addition of a mass constraint. While this result is better, a problem remains: We have many areas with intermediate values for \rho_{design}. For the design, we only need to know if a given area is void or not. In order to get mostly 1 or 0 for \rho_{design}, the intermediate values must be penalized. To do so, we can add an exponent p in the penalized Young’s modulus expression: E(X) = \rho_{design}^p \, E_0. In practice, p is taken between 3 and 5. For instance, if p = 5 and \rho_{design}= 0.5, the penalized Young’s modulus will be 0.03125 E_0. The contribution to the mass constraint, meanwhile, will still be 0.5. As such, the optimization algorithm will tend to push the design variable to 0 or 1. With our new penalized Young’s modulus, we get the following result. Results with the new penalized Young’s modulus. A beam design has started to emerge! There is, however, a problematic checkerboard design, one that seems to be highly dependent upon the chosen mesh. In order to avoid the checkerboard design, we need to control the variations of \rho_{design} in space. One way to estimate variations of a variable field is to compute its derivative norm integrated over the whole domain: \int_\Omega |\nabla \rho_{design}| \, d\Omega. A new question arises: How can we minimize both the variation of \rho_{design} and the total strain energy? Since a scalar objective function is necessary, these objectives must be combined. We can think about adding them, but first, the two expressions need to be scaled to get values around 1. Concerning the stiffness objective, we simply divide by Ws0, which is the value of the total strain energy when \rho_{design} is constant. In regards to the regularization term, we can take the following scaling factor \frac{h_0 h_{max}}{A}, where h_{max} is the maximum mesh size, h_0 is the expected size of details in the solution, and A is the area of the design space.
Our final optimization problem is now written as: \displaystyle \min_{\rho_{design}} \left( \frac{1}{Ws0} \int_\Omega Ws \, d\Omega + q \, \frac{h_0 h_{max}}{A} \int_\Omega |\nabla \rho_{design}| \, d\Omega \right), subject to the mass constraint and bounds above, where the factor q controls the regularization weight. Finally, the discretization of \rho_{design} needs to be changed to Lagrange linear elements to enable the computation of its derivative. By solving this final problem, we obtain results that offer helpful insight as to the best design structure for the beam. Results with regularization. Such a design scheme can be seen at different scales in the real world, as illustrated in the bridge below. A warren-type truss bridge. Image in the public domain, via Wikimedia Commons. Designing a Bridge Above Water Now that we have set up our topology optimization method, let’s move on to a slightly more complicated design space. We want to answer the question of how to design a bridge above water. To do so, a road zone in the geometry must be defined where the Young’s modulus is not penalized. Design space for a through-arch bridge. After a few iterations, we obtain a very good result for the through-arch bridge, one that is quite impressive. Such a result could provide architects with a solid understanding of the design that should be used for the bridge. Topology optimization results for a through-arch bridge. While the mathematical optimization algorithm had no guidelines on the particular design scheme, the result depicted above likely brings a real bridge design to mind. The Bayonne Bridge, shown below, is just one example among many others. The Bayonne Bridge. Image in the public domain, via Wikimedia Commons. It is important to note that this topology optimization method can be used in the exact same way for 3D cases. Applying the same bridge design question, the animation below shows a test in 3D for the design of a deck arch bridge. 3D topology optimization for a deck arch bridge. Concluding Thoughts Here, we have described the basics of using the topology optimization method for a structural mechanics analysis. To implement this method on your own, you can download the Topology Optimization of an MBB Beam tutorial from our Application Gallery. While topology optimization may have initially been built for a mechanical design, the penalization method can also be applied to a large range of physics-based analyses in COMSOL Multiphysics. Our Minimizing the Flow Velocity in a Microchannel tutorial, for instance, provides an example of flow optimization. References “Optimal shape design as a material distribution problem”, by M.P. Bendsøe. Topology Optimization: Theory, Methods, and Applications, by M.P. Bendsøe and O. Sigmund.
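The effect of the penalization exponent is easy to reproduce outside the simulation environment. The few lines of Python below are a standalone illustration (not COMSOL code): they print how much stiffness an intermediate density contributes for different exponents p, next to its unchanged contribution to the mass constraint, which is exactly the imbalance the optimizer exploits to push densities toward 0 or 1.

def penalized_modulus(rho, E0=1.0, p=5):
    """SIMP-style penalization: E(rho) = rho**p * E0, so intermediate densities are 'expensive'."""
    return rho**p * E0

rho = 0.5
for p in (1, 3, 5):
    stiffness_fraction = penalized_modulus(rho, p=p)   # contribution to E relative to E0
    mass_fraction = rho                                # contribution to the mass constraint
    print(f"p={p}: stiffness fraction {stiffness_fraction:.5f}, mass fraction {mass_fraction:.2f}")
# p=5 reproduces the value quoted above: 0.5**5 = 0.03125 of E0 for half the mass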
Say we want to place n items on the real line. Let us denote the position of item i by $p_i$. We have interval constraints on the position $p_i$, i.e. we are given $l_i, r_i$ such that $l_i \le p_i \le r_i$. My problem is: given a specific item s, how do I compute the maximum gap possible to the right of s? By maximum gap to the right of s I mean the distance between s and the next item to its right. Mathematical Description: More formally, given $s \in [n]$, and $(l_i,r_i)$ for $i\in[n]$ I want to find $$f(s) = \max_{p_1,\dots,p_n} p_t - p_s$$ subject to \begin{align*} 1)\,& l_i \le p_i \le r_i\, \forall\,i \in [n] \\ 2)\,& \forall\,i\ne s,t, \text{ either } p_i \le p_s \text{ or }p_i \ge p_t \text{ (there is no item between s and t)} \end{align*} In the absence of the second constraint the problem would have been a linear program. The second constraint makes it difficult. I know how to model this question as an integer program (see https://bit.ly/2Jhf7kJ), but I am interested in an actual algorithm.
I believe it is impossible in general to distinguish questions asked by a graduate student and those asked by a mathematician in a domain in which they are not specialist. Since the latter are allowed on MathOverflow, it would make little sense to disallow the former or relegate them elsewhither. It is hard to argue that MathOverflow receives too many ... I think you ask a quite reasonable question. One certainly could have a more condenseddescription of what is on-topic on MathOverflow, and put this in a prominent place where people see it before asking their first question. In particular one could make more clear what isthe difference to Math.SE. As a draft of such description, one could take e.g. the ... I would just like to point out that there are two types of homework questions: graded homework and ungraded homework. In North America, especially at universities with the kind of budget that permits graders and TAs, there is usually a component of the final grade consisting of graded assignments.But, for example, in parts of Europe and South America, one'... The growth of online tools, such as Math.SE, means that one can no longer base a grade on any work done at home. Quizzes and in-class tests is the way to go.Now I explicitly allow use of any sources, as long as by the quiz/test time students know the material.In upper level undergraduate (or lower level graduate) courses my grading method of choice used ... Can someone figure out what is going on in this question about representation varieties of associative algebras? It looks like the OP may have found an error in a published article of Smalo, but I can't prove that its an error. Okay, here's one that I think is interesting and has remained unanswered for over a month. (It might also test out this system.)Does there exist a bijection of $\mathbb{R}^n$ with itself such that the forward map is connected but the inverse is not?If it isn't clear from the title, this question (asked by Willie Wong) asks whether the inverse of any ... EDIT: Some more information from Catija: This screen element is not currently customizable, so if we wanted it customized, SE would have to implement a new feature. Here Catija indicates that the current SE thinking is that this particular screen element ought to be something which looks the same across the whole SE network. But here she indicates that on ... In my view, your questions are at a good level for MO. A good indication of this is that (as you implicitly point out) you're getting lots of upvotes but almost no answers on MSE.I like the detail you put into your questions, and I think that will go down well. If you're asking questions as a result of going through textbooks and getting stuck, probably ... If you have posted a question at MSE and want to cross-post it at MO or the other way around, you can do it under the following conditions:Wait several days, not just hours. I would suggest a week.Provide links between the two versions. The MO question should have link to the one at MSE and vice versa. (This is for honesty and avoiding duplicate efforts ... As a data point for our Stack Exchange overlords, I have twice now tried to migrate a question from math.SE to math.SE because I didn't realize it was already there. I'd really like to get a striking difference in color and background. 
It is doubtful that SE would allow another "mathematical" mathematics site to open.In fact some time ago (2012, so even prior to MO's move to the SE network) there was an Area 51 proposal for a "Postgrad Mathematics" site. The proposal itself has been deleted, but a discussion or two live on. It is highly likely that an SE employee closed it after deeming ... There is no formal connection. You may often see the same question on both sites because a lot of people cross-post (manually), a practice which is generally frowned upon, especially when the same question appears at both sites almost simultaneously and the poster doesn't mention anything about cross-posting or doesn't link to the question at the other site. ... This question asks, given a homogenous degree $n$ polynomial map $f: \mathbb{Z}^n \to \mathbb{Z}$, how can we test if it is a norm map from a number field? I know roughly what the answer must be: Over $\mathbb{Q}^{alg}$, $f$ should factor as $n$ distinct hyperplanes and $f$ should be irreducible over $\mathbb{Q}$. That should make $f: \mathbb{Q}^n \to \... Yes, one should be closed as off-topic, and migrated to the other site, where it can be closed as a duplicate. It's a bit painful that we can't close as a duplicate of a question on another site, but that's the present state of things.I think the majority of cross-posted questions will be off-topic for MO and on-topic on MSE, but not all. I see you're not ... You can flag the post for moderator attention on MSE, choose the free-form flag and ask for a migration to MO.Another option, which is useful when the crickets really chirp, and you haven't got any comments either, is to simply delete the question and post it anew on MO. Personal opinion: I think that the current system of having several of separate sites for different math-related topics (apart from MO and MSE, there are also related ones like scientific computing, math educators, history of science...) is not the best one, and adding new sites doesn't help; it just makes the community more fragmented.Maybe we need tags ... Is there an injective cubic polynomial $\mathbb Z^2\rightarrow\mathbb Z$?This question asks exactly what its title suggests: is there a polynomial of degree $3$ in $\mathbb Z[x,y]$ whose induced map $\mathbb Z^2\rightarrow\mathbb Z$ is injective? The question is trivial for polynomials of degree $4$ (where the answer is yes) or polynomials of degree $2$ or ... Here is a question which seems interesting,What does it take to divide by $2$?In a nutshell, we know that in $\sf ZF$, assuming classical logic, we can prove that if $A\times\{0,1\}$ and $B\times\{0,1\}$ are equipotent, then $|A|=|B|$.How far can this bend before it breaks? Can we do the same in intuitionistic set theories?(Minor bump: the question ... My memory of what happened is this: I was pinged to come over to the moderator chat room and was asked if we "wanted" three MSE posts to be migrated. I said I wasn't sure about the post in question, meaning I wasn't sure whether actual experts would keep it open if it were migrated -- the MSE moderator took that as a "no". I told the other MO moderators what ... The reddit website has several forums dedicated to mathematics.The main math forum has a large audience but there are several others subs such as /r/learnmath, /r/homeworkhelp, /r/cheatatmathhomework, /r/askmath, /r/MathHelpwhich are somehow more "finegrained" than mathoverflow/mathstackexchange.See this link for an extensive list of subreddits dealing ... 
Does the open mapping theorem imply the Baire category theorem?The open mapping theorem and the Baire category theorem are both theorems of ZFC, but are independent of ZF. Working in ZF, one can (apparently) prove that BCT implies OMT. Does the converse hold in ZF?This is currently the highest-voted question with the functional-analysis tag. The reason to wait a week or so before crossposting is to avoid having someone spend a lot of time trying to answer on one site, without realizing that it's already answered on the other. After a week, it's less likely that anyone is actively working on it.Additionally, you must edit every copy of the question to include a link to every other copy.... I suggest that we leave the job to Stack Exchange's design team. I'm not sure why we are supposed to come up with a solution ourselves; we are not professionals, and surely SE has someone on staff whose job is coming up with good site designs. I like many of their designs that were on other sites in the network before this change.What worries me, though, ... I'm the author of the question, and I would not deem it rude to migrate it back to MSE. Actually, I was considering a way to put it back, but since I could not publish it again on MO, I felt blocked. So I'm going to flag it to migrate it back to MSE.
These exercises are not tied to a specific programming language. Example implementations are provided under the Code tab, but the Exercises can be implemented in whatever platform you wish to use (e.g., Excel, Python, MATLAB, etc.).

### Exercise 1: Forces acting on a charged particle and the equations of motion

Imagine that we have a particle of mass $m$, charge $q$, and velocity $\vec{v} = (v_x,v_y,v_z)$ with $v_z \gg v_x, v_y$ entering a region of uniform magnetic field $\vec{B} = B_x \hat{x}$ and uniform electric field $\vec{E} = E_y \hat{y}$. As shown below, $B_x > 0$ and $E_y < 0$. The length of the field region along the $z$-axis is $L$.

![Alt Figure](images/ExB_Filter/ExB_Filter_geometry.png)

Neglecting any other forces (e.g., gravitational forces), show that the Cartesian components of the combined electric and magnetic forces are $$ F_x = 0 $$ $$ F_y = qE_y + qv_zB_x $$ $$ F_z = - qv_yB_x $$ resulting in the equations of motion $$ \ddot{x} = 0 $$ $$ \ddot{y} = {{q}\over{m}}\Bigl(E_y + v_zB_x\Bigr) $$ $$ \ddot{z} = - {{q}\over{m}}v_yB_x $$ where the dot accents indicate differentiation with respect to time. Note that the particle will not experience any transverse acceleration if $v_z = v_{pass} = -E_y/B_x$.

(a) Assume that $E_y = -105$ V/m and $B_x = 2.00\times 10^{-3}$ T, and that all other field components are zero. Calculate, by hand, the Cartesian components of the acceleration at the instant when a Li$^+$ ion of mass 7 amu and kinetic energy 100 eV enters the field region traveling along the direction $\hat{u} = (\hat{x} + \hat{y} + 100\hat{z})/\sqrt{10002}$. How will these acceleration components compare to those for a doubly ionized nitrogen ion (N$^{2+}$)?

(b) Write a code to perform the calculation in part (a). Note that, as soon as the particle enters the field region, the velocity components will change, and therefore so will the forces. To calculate an accurate trajectory, it is necessary to repeatedly calculate the forces, a task for which the computer is very well suited.

(c) What do you expect the trajectory of this ion to look like as it traverses the field region? Explain your answer.

### Exercise 2: Computing the Trajectory

Solve the equations of motion to obtain the trajectory of the Li$^+$ ion from Exercise 1 while it traverses the field region from $z = 0$ to $z = L = 0.25$ m.

(a) On separate graphs, plot $x$, $y$, and $z$ versus time.

(b) Plot the trajectory in space. What does the trajectory of the ion look like? What did you expect (Exercise 1)? What happens if you reduce the initial kinetic energy of the ion by a factor of 100? A factor of 10,000?

(c) What is the kinetic energy of the ion at the end of its trajectory? How does it compare to its initial energy?

### Exercise 3: The $\vec{E} \times \vec{B}$ (Wien) Filter, Part 1

Regions of mutually perpendicular electric and magnetic fields can be used to filter a collection of moving charged particles according to their velocity. If we assume that a particle of velocity $v_{pass} = -E_y/B_x$ enters the field region traveling *exactly* along the $z$-axis, the particle will experience zero net force and therefore zero acceleration and zero deflection from the $z$-axis. If a small, circular aperture of radius $R$ is placed on the $z$-axis at $z = L$, then this particle will be transmitted through the aperture.
(a) Use your program from Exercise 2 to determine the maximum value of $v_{z,max} = v_{pass} + \Delta v$ for which an aperture of radius $R = 1.0$ mm will transmit the Li$^+$ ion (now assuming that $v_x = v_y = 0$). What is the value of $\Delta v/v_{pass}$?

(b) Repeat (a) for an aperture of radius $R = 2.0$ mm. What is the value of $\Delta v/v_{pass}$?

### Exercise 4: The $\vec{E} \times \vec{B}$ (Wien) Filter, Part 2

As an extension of Exercise 3, now assume that the particles entering the field region at the origin have a normal distribution of velocities directed purely along the $z$-axis. The center of the distribution is $v_{z,pass}$ and its width is $0.1v_{z,pass}$.

(a) Allow $40,000$ particles from this distribution to enter the field region at the origin. What is the resulting histogram of the scaled velocities $v_z/v_{z,pass}$ of the particles transmitted through a circular aperture of radius $R=1.0$ mm centered on the $z$-axis? How does it compare to the histogram of the initial velocities?

(b) Repeat part (a) for an aperture of radius $R=0.5$ mm.

It is worth noting that an actual source of ions will not only be characterized by a distribution of velocities, but also by a distribution of directions (no ion beam is strictly mono-directional, just as no laser beam is strictly mono-directional). This is an additional factor that would have to be considered to accurately simulate the performance of a real Wien filter.
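The example implementation below is not from the original Code tab; it is a minimal Python sketch, assuming SI units, the Exercise 1 field values, a singly charged Li$^+$ ion of mass 7 amu, and a plain Euler integrator (adequate for a rough trajectory, although a smaller step or a Runge-Kutta/Boris scheme would be more accurate).

import numpy as np

# --- constants and fields (SI units); values as stated in Exercise 1 ---
q = 1.602e-19                     # charge of a singly ionized Li+ (C)
m = 7 * 1.661e-27                 # mass of a 7 amu ion (kg)
E = np.array([0.0, -105.0, 0.0])  # E_y = -105 V/m
B = np.array([2.00e-3, 0.0, 0.0]) # B_x = 2.00e-3 T
L = 0.25                          # length of the field region (m)

# --- initial conditions: 100 eV along (x + y + 100 z)/sqrt(10002) ---
KE = 100 * 1.602e-19
speed = np.sqrt(2 * KE / m)
u_hat = np.array([1.0, 1.0, 100.0]) / np.sqrt(10002.0)
v = speed * u_hat
r = np.zeros(3)

dt = 1.0e-9                       # time step (s)
positions = [r.copy()]
while r[2] < L:
    a = (q / m) * (E + np.cross(v, B))   # Lorentz acceleration q(E + v x B)/m
    v = v + a * dt                       # simple Euler step (a sketch, not production code)
    r = r + v * dt
    positions.append(r.copy())

positions = np.array(positions)
print("initial acceleration (m/s^2):", (q / m) * (E + np.cross(speed * u_hat, B)))
print("transverse offset at z = L (mm):", positions[-1, :2] * 1e3)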
bnelo12 writes (slightly paraphrased) Can you explain exactly how $latex {1 + 2 + 3 + 4 + \ldots = – \frac{1}{12}}$ in the context of the Riemann $latex {\zeta}$ function? We are going to approach this problem through a related problem that is easier to understand at first. Many are familiar with summing geometric series $latex \displaystyle g(r) = 1 + r + r^2 + r^3 + \ldots = \frac{1}{1-r}, $ which makes sense as long as $latex {|r| < 1}$. But if you’re not, let’s see how we do that. Let $latex {S(n)}$ denote the sum of the terms up to $latex {r^n}$, so that $latex \displaystyle S(n) = 1 + r + r^2 + \ldots + r^n. $ Then for a finite $latex {n}$, $latex {S(n)}$ makes complete sense. It’s just a sum of a few numbers. What if we multiply $latex {S(n)}$ by $latex {r}$? Then we get $latex \displaystyle rS(n) = r + r^2 + \ldots + r^n + r^{n+1}. $ Notice how similar this is to $latex {S(n)}$. It’s very similar, but missing the first term and containing an extra last term. If we subtract them, we get $latex \displaystyle S(n) – rS(n) = 1 – r^{n+1}, $ which is a very simple expression. But we can factor out the $latex {S(n)}$ on the left and solve for it. In total, we get $latex \displaystyle S(n) = \frac{1 – r^{n+1}}{1 – r}. \ \ \ \ \ (1)$ This works for any natural number $latex {n}$. What if we let $latex {n}$ get arbitrarily large? Then if $latex {|r|<1}$, then $latex {|r|^{n+1} \rightarrow 0}$, and so we get that the sum of the geometric series is $latex \displaystyle g(r) = 1 + r + r^2 + r^3 + \ldots = \frac{1}{1-r}. $ But this looks like it makes sense for almost any $latex {r}$, in that we can plug in any value for $latex {r}$ that we want on the right and get a number, unless $latex {r = 1}$. In this sense, we might say that $latex {\frac{1}{1-r}}$ extends the geometric series $latex {g(r)}$, in that whenever $latex {|r|<1}$, the geometric series $latex {g(r)}$ agrees with this function. But this function makes sense in a larger domain then $latex {g(r)}$. People find it convenient to abuse notation slightly and call the new function $latex {\frac{1}{1-r} = g(r)}$, (i.e. use the same notation for the extension) because any time you might want to plug in $latex {r}$ when $latex {|r|<1}$, you still get the same value. But really, it’s not true that $latex {\frac{1}{1-r} = g(r)}$, since the domain on the left is bigger than the domain on the right. This can be confusing. It’s things like this that cause people to say that $latex \displaystyle 1 + 2 + 4 + 8 + 16 + \ldots = \frac{1}{1-2} = -1, $ simply because $latex {g(2) = -1}$. This is conflating two different ideas together. What this means is that the function that extends the geometric series takes the value $latex {-1}$ when $latex {r = 2}$. But this has nothing to do with actually summing up the $latex {2}$ powers at all. So it is with the $latex {\zeta}$ function. Even though the $latex {\zeta}$ function only makes sense at first when $latex {\text{Re}(s) > 1}$, people have extended it for almost all $latex {s}$ in the complex plane. It just so happens that the great functional equation for the Riemann $latex {\zeta}$ function that relates the right and left half planes (across the line $latex {\text{Re}(s) = \frac{1}{2}}$) is $latex \displaystyle \pi^{\frac{-s}{2}}\Gamma\left( \frac{s}{2} \right) \zeta(s) = \pi^{\frac{s-1}{2}}\Gamma\left( \frac{1-s}{2} \right) \zeta(1-s), \ \ \ \ \ (2)$ where $latex {\Gamma}$ is the gamma function, a sort of generalization of the factorial function. 
If we solve for $latex {\zeta(1-s)}$, then we get $latex \displaystyle \zeta(1-s) = \frac{\pi^{\frac{-s}{2}}\Gamma\left( \frac{s}{2} \right) \zeta(s)}{\pi^{\frac{s-1}{2}}\Gamma\left( \frac{1-s}{2} \right)}. $ If we stick in $latex {s = 2}$, we get $latex \displaystyle \zeta(-1) = \frac{\pi^{-1}\Gamma(1) \zeta(2)}{\pi^{\frac{1}{2}}\Gamma\left( \frac{-1}{2} \right)}. $ We happen to know that $latex {\zeta(2) = \frac{\pi^2}{6}}$ (this is called the Basel problem) and that $latex {\Gamma(\frac{1}{2}) = \sqrt \pi}$. We also happen to know that in general, $latex {\Gamma(t+1) = t\Gamma(t)}$ (it is partially in this sense that the $latex {\Gamma}$ function generalizes the factorial function), so that $latex {\Gamma(\frac{1}{2}) = \frac{-1}{2} \Gamma(\frac{-1}{2})}$, or rather that $latex {\Gamma(\frac{-1}{2}) = -2 \sqrt \pi.}$ Finally, $latex {\Gamma(1) = 1}$ (on integers, it agrees with the one-lower factorial). Putting these together, we get that $latex \displaystyle \zeta(-1) = \frac{\pi^2/6}{-2\pi^2} = \frac{-1}{12}, $ which is what we wanted to show. $latex {\diamondsuit}$ The information I quoted about the Gamma function and the zeta function’s functional equation can be found on Wikipedia or any introductory book on analytic number theory. Evaluating $latex {\zeta(2)}$ is a classic problem that has been solved in many ways, but is most often taught in a first course on complex analysis or as a clever iterated integral problem (you can prove it with Fubini’s theorem). Evaluating $latex {\Gamma(\frac{1}{2})}$ is rarely done and is sort of a trick, usually done with Fourier analysis. As usual, I have also created a paper version. You can find that here.
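As a quick numerical cross-check of the values used above (my addition, not part of the original post), mpmath's analytic continuations of the zeta and Gamma functions reproduce every ingredient of the calculation:

from mpmath import mp, zeta, gamma, pi, sqrt

mp.dps = 30
print(zeta(-1))                    # -0.0833333... = -1/12
print(zeta(2), pi**2 / 6)          # both 1.6449340668...
print(gamma(0.5), sqrt(pi))        # both 1.7724538509...
print(gamma(-0.5), -2 * sqrt(pi))  # both -3.5449077018...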
We know that $LCM(a,b)= \frac{ab}{GCD(a,b)}$. What about $LCM(a,b,c)$? Can anyone help us, because our instructor doesn't know the method and she just laid the problem on us. Thanks.

For 3 numbers, we have the relation $$ LCM(a,b,c) = \frac{abc}{GCD(ab,bc,ca)} $$ The proof is as follows: \begin{align} a | n \text{ and } b | n \text{ and } c|n & \iff abc | nab \text{ and } abc | nbc \text{ and } abc | nca\\ & \iff abc | GCD(nab,nbc,nca)\\ & \iff \frac{abc}{GCD(ab,bc,ca)} | n \end{align}

By considering prime factorizations, the statement $\text{lcm}(a,b,c)=\text{lcm}(\text{lcm}(a,b),c)$ reduces to: $$ \max\{x,y,z\}=\max\{\max\{x,y\},z\}\qquad(\star) $$ We prove this in two steps. First of all, it is clear that $\max\{x,y\}\leq \max\{x,y,z\}$ and $z\leq \max\{x,y,z\}$. Thus $$ \max\{x,y,z\}\geq\max\{\max\{x,y\},z\}. $$ On the other hand, we observe that either $\max\{x,y,z\}=\max\{x,y\}$ or $\max\{x,y,z\}=z$. Hence in either case, $$ \max\{x,y,z\}\leq\max\{\max\{x,y\},z\}. $$ Thus we've shown both inequalities, so equality $(\star)$ follows. If you don't see why $(\star)$ implies the $\text{lcm}$ statement, write out the prime factorizations of $a,b,c$. Fix any prime $p$, and compare the exponents appearing in $a,b,c$. Then the $\text{lcm}$ operation corresponds to taking the maximum of the exponents appearing in $a,b,c$ for the prime $p$.

For me, the easiest applied way (which can also easily be programmed on a computer) to find it is using the prime factorization. Write the numbers over a common list of primes, allowing zero exponents: $$a_1,\dots,a_k\in\Bbb N\;,\;\;a_i=\prod_{m=1}^{N} p_{m}^{\beta_{im}}\;\;,\;\;p_{m}\;\;\text{primes},\;\;\beta_{im}\in\Bbb Z_{\ge 0},$$ then $$\operatorname{lcm}(a_1,\dots,a_k)=\prod_{m=1}^{N}p_{m}^{\max(\beta_{1m},\dots,\beta_{km})}$$ For example, if we have $\;15\;,\;\;72\;,\;\;32\;$, then $$\begin{cases}15=3\cdot5\\72=2^3\cdot3^2\\32=2^5\end{cases}\implies\;\operatorname{lcm}(15,72,32)=2^5\cdot3^2\cdot5=1440$$
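A small Python sketch of the same ideas (my addition, not part of the answers): fold the two-argument formula pairwise, which is exactly $\operatorname{lcm}(\operatorname{lcm}(a,b),c)$, and compare it with the $abc/\gcd(ab,bc,ca)$ identity from the first answer.

from math import gcd
from functools import reduce

def lcm(a, b):
    return a * b // gcd(a, b)

def lcm_many(*nums):
    # lcm is associative, so folding pairwise computes lcm(lcm(a, b), c), ...
    return reduce(lcm, nums)

a, b, c = 15, 72, 32
print(lcm_many(a, b, c))                          # 1440 = 2^5 * 3^2 * 5
print(a * b * c // reduce(gcd, (a*b, b*c, c*a)))  # 1440 again, via abc / GCD(ab, bc, ca)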
Answer $$-\sin35^\circ=\sin(-35^\circ)$$ 20 goes with A. Work Step by Step $$-\sin35^\circ$$ Recall the negative-angle identities for sines: $$\sin(-\theta)=-\sin\theta$$ Therefore, we can rewrite $-\sin35^\circ$ as follows: $$-\sin35^\circ=\sin(-35^\circ)$$ Looking at the choices, we see that 20 is matched with A.
We can use the equation of a curve in polar coordinates to compute some areas bounded by such curves. The basic approach is the same as with any application of integration: find an approximation that approaches the true value. For areas in rectangular coordinates, we approximated the region using rectangles; in polar coordinates, we use sectors of circles, as depicted in figure 12.3.1. Recall that the area of a sector of a circle is $\ds \alpha r^2/2$, where $\alpha$ is the angle subtended by the sector. If the curve is given by $r=f(\theta)$, and the angle subtended by a small sector is $\Delta\theta$, the area is $\ds (\Delta\theta)(f(\theta))^2/2$. Thus we approximate the total area as $$\sum_{i=0}^{n-1} {1\over 2} f(\theta_i)^2\;\Delta\theta.$$ In the limit this becomes $$\int_a^b {1\over 2} f(\theta)^2\;d\theta.$$ Example 12.3.1 We find the area inside the cardioid $r=1+\cos\theta$. $$\int_0^{2\pi}{1\over 2} (1+\cos\theta)^2\;d\theta= {1\over 2}\int_0^{2\pi} 1+2\cos\theta+\cos^2\theta\;d\theta= {1\over 2}\left.(\theta +2\sin\theta+ {\theta\over2}+{\sin2\theta\over4})\right|_0^{2\pi}={3\pi\over2}.$$ Example 12.3.2 We find the area between the circles $r=2$ and $r=4\sin\theta$, as shown in figure 12.3.2. The two curves intersect where $2=4\sin\theta$, or $\sin\theta=1/2$, so $\theta=\pi/6$ or $5\pi/6$. The area we want is then $$ {1\over2}\int_{\pi/6}^{5\pi/6} 16\sin^2\theta-4\;d\theta={4\over3}\pi + 2\sqrt{3}. $$ This example makes the process appear more straightforward than it is. Because points have many different representations in polar coordinates, it is not always so easy to identify points of intersection. Example 12.3.3 We find the shaded area in the first graph of figure 12.3.3 as the difference of the other two shaded areas. The cardioid is $r=1+\sin\theta$ and the circle is $r=3\sin\theta$. We attempt to find points of intersection: $$\eqalign{ 1+\sin\theta&=3\sin\theta\cr 1&=2\sin\theta\cr 1/2&=\sin\theta.\cr} $$ This has solutions $\theta=\pi/6$ and $5\pi/6$; $\pi/6$ corresponds to the intersection in the first quadrant that we need. Note that no solution of this equation corresponds to the intersection point at the origin, but fortunately that one is obvious. The cardioid goes through the origin when $\theta=-\pi/2$; the circle goes through the origin at multiples of $\pi$, starting with $0$. Now the larger region has area $$ {1\over2}\int_{-\pi/2}^{\pi/6} (1+\sin\theta)^2\;d\theta= {\pi\over2}-{9\over16}\sqrt{3} $$ and the smaller has area $$ {1\over2}\int_{0}^{\pi/6} (3\sin\theta)^2\;d\theta= {3\pi\over8} - {9\over16}\sqrt{3} $$ so the area we seek is $\pi/8$. Exercises 12.3 Find the area enclosed by the curve. 
Ex 12.3.1 $\ds r=\sqrt{\sin\theta}$ (answer)
Ex 12.3.2 $\ds r=2+\cos\theta$ (answer)
Ex 12.3.3 $\ds r=\sec\theta, \pi/6\le\theta\le\pi/3$ (answer)
Ex 12.3.4 $\ds r=\cos\theta, 0\le\theta\le\pi/3$ (answer)
Ex 12.3.5 $\ds r=2a\cos\theta, a>0$ (answer)
Ex 12.3.6 $\ds r=4+3\sin\theta$ (answer)
Ex 12.3.7 Find the area inside the loop formed by $\ds r=\tan(\theta/2)$. (answer)
Ex 12.3.8 Find the area inside one loop of $\ds r=\cos(3\theta)$. (answer)
Ex 12.3.9 Find the area inside one loop of $\ds r=\sin^2\theta$. (answer)
Ex 12.3.10 Find the area inside the small loop of $\ds r=(1/2)+\cos\theta$. (answer)
Ex 12.3.11 Find the area inside $\ds r=(1/2)+\cos\theta$, including the area inside the small loop. (answer)
Ex 12.3.12 Find the area inside one loop of $\ds r^2=\cos(2\theta)$. (answer)
Ex 12.3.13 Find the area enclosed by $r=\tan\theta$ and $\ds r={\csc\theta\over\sqrt2}$. (answer)
Ex 12.3.14 Find the area inside $r=2\cos\theta$ and outside $r=1$. (answer)
Ex 12.3.15 Find the area inside $r=2\sin\theta$ and above the line $r=(3/2)\csc\theta$. (answer)
Ex 12.3.16 Find the area inside $r=\theta$, $0\le\theta\le2\pi$. (answer)
Ex 12.3.17 Find the area inside $\ds r=\sqrt{\theta}$, $0\le\theta\le2\pi$. (answer)
Ex 12.3.18 Find the area inside both $\ds r=\sqrt3\cos\theta$ and $r=\sin\theta$. (answer)
Ex 12.3.19 Find the area inside both $r=1-\cos\theta$ and $r=\cos\theta$. (answer)
Ex 12.3.20 The center of a circle of radius 1 is on the circumference of a circle of radius 2. Find the area of the region inside both circles. (answer)
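As a numerical sanity check of the worked examples above (my addition, not part of the text), a plain midpoint Riemann sum of $\frac12\int f(\theta)^2\,d\theta$ reproduces $3\pi/2$ for the cardioid of Example 12.3.1 and $4\pi/3+2\sqrt3$ for the region between the two circles of Example 12.3.2:

import numpy as np

def polar_area(f, a, b, n=200000):
    # midpoint rule for (1/2) * integral of f(theta)^2 d(theta) over [a, b]
    theta = a + (np.arange(n) + 0.5) * (b - a) / n
    return 0.5 * np.sum(f(theta)**2) * (b - a) / n

# Example 12.3.1: cardioid r = 1 + cos(theta); expected 3*pi/2
print(polar_area(lambda t: 1 + np.cos(t), 0, 2*np.pi), 1.5*np.pi)

# Example 12.3.2: inside r = 4 sin(theta) and outside r = 2; expected 4*pi/3 + 2*sqrt(3)
inner = polar_area(lambda t: 4*np.sin(t), np.pi/6, 5*np.pi/6)
outer = polar_area(lambda t: 2 + 0*t, np.pi/6, 5*np.pi/6)
print(inner - outer, 4*np.pi/3 + 2*np.sqrt(3))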
GMATPrepNow wrote:
At a certain school, the student to teacher ratio is 52 to 9. If 38 students and 11 teachers leave, which of the following COULD represent the number of students and teachers remaining at the school?

A) 532 students and 88 teachers
B) 794 students and 162 teachers
C) 1106 students and 225 teachers
D) 1418 students and 241 teachers
E) 1728 students and 295 teachers

\(?\,\,\,:\,\,\,\left( {{\text{final}}\,\,S,{\text{final}}\,\,T} \right)\,\,\,\,\,\underline {{\text{possible}}}\)

\(\left\{ \matrix{ S\,\,:\,\,\,\,52k\,\,\,\,\, \to \,\,\,\,\,52k - 38 \hfill \cr T\,\,:\,\,\,\,9k\,\,\,\,\, \to \,\,\,\,\,9k - 11 \hfill \cr} \right.\,\,\,\,\,\left( {k \ge 1\,\,{\mathop{\rm int}} } \right)\,\,\,\,\, \Rightarrow \,\,\,\,\,{\rm{final}}\,\,{\rm{sum}}\,\,{\rm{ = }}\,\,61k - 49\,\,\,\,\, \Rightarrow \,\,\,\,\,\left( {{\rm{final}}\,\,{\rm{sum}} + 49} \right)\,\,{\rm{divisible}}\,\,{\rm{by}}\,\,61\,\,\,\,\left( * \right)\)

\(\eqalign{ & \left( A \right)\,\,532 + 88 + 49 = 669 = 610 + 59\,\,\,\, \Rightarrow \,\,\,\,{\rm{no}}! \cr & \left( B \right)\,\,794 + 162 + 49 = 1005 = 976 + 29 = 61 \cdot 16 + 29\,\,\,\, \Rightarrow \,\,\,\,{\rm{no}}! \cr & \left( C \right)\,\,1106 + 225 + 49 = 1380 = 1220 + 160\,\,\,\, \Rightarrow \,\,\,\,{\rm{no}}! \cr & \left( D \right)\,\,1418 + 241 + 49 = 1708 = 1220 + 488 = 1220 + 61 \cdot 8\,\,\,\, \Rightarrow \,\,\,\,{\rm{survivor}}! \cr & \left( E \right)\,\,1728 + 295 + 49 = 2072 = 2440 - 368 = 2440 - 366 - 2\,\,\,\, \Rightarrow \,\,\,\,{\rm{no}}! \cr}\)

There is only one survivor, meaning the property (*) is good enough for our purposes!

The correct answer is (D).

Regards,
Fabio.
_________________
Fabio Skilnik :: GMATH method creator (Math for the GMAT)
Our high-level "quant" preparation starts here: https://gmath.net
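A quick brute-force check of the same idea (my addition): besides testing divisibility by 61, the snippet also verifies that the split into students and teachers matches $52k-38$ and $9k-11$, which is slightly stronger than property (*).

choices = {"A": (532, 88), "B": (794, 162), "C": (1106, 225), "D": (1418, 241), "E": (1728, 295)}
for label, (students, teachers) in choices.items():
    total = students + teachers + 49        # equals 61k if the choice is consistent
    k, rem = divmod(total, 61)
    if rem == 0 and 52*k - 38 == students and 9*k - 11 == teachers:
        print(label, "works with k =", k)   # prints: D works with k = 28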
So in my Linear Algebra course I was shown that we cannot directly use row reduction to invert a matrix over a commutative ring in general because the algorithm requires elements to be invertible (which is not always the case in a ring). Inverting a matrix with the formula $A^{-1} = det(A)^{-1}C^T$ is guaranteed to work in every case. However, it was also shown that in some rings we might still use row reduction in a way. For example, we can treat any matrix over $\mathbb{Z}$ as a matrix over $\mathbb{R}$ and invert it with row reduction - if the computed inverse only has elements in $\mathbb{Z}$, the matrix is invertible. Knowing that I tried inverting matrices over $\mathbb{Z}/n\mathbb{Z}$ in a similar way. We can compute the inverse of $\begin{pmatrix}1 & 2 \\ 3 & 5\end{pmatrix}$ in $\mathbb{Z}/12\mathbb{Z}$ using the determinant and get $\begin{pmatrix}7 & 2 \\ 3 & 11\end{pmatrix}$. Inverting the same matrix over $\mathbb{R}$ yields $\begin{pmatrix}-5 & 2 \\ 3 & -1\end{pmatrix}$, which is the equivalent matrix modulo $12$, so this seems to work. In another case (matrix in $\mathbb{F}_7$), I was initially disheartened by seeing terms like $\frac{3}{2}$ in the inverse, but then my tutor reminded me that $\frac{3}{2}$ is really just $3\cdot2^{-1}$ which in $\mathbb{F}_7$ is equal to $3\cdot 4 = 12 = 5$. My question is whether this is guaranteed to work in general. That is, will inverting a matrix in $\mathbb{R}$ and then taking inverses and modulos appropriately be successful exactly if the matrix is invertible in $\mathbb{Z}/n\mathbb{Z}$? It is clear that this is the case if $n$ is prime, because then $\mathbb{Z}/n\mathbb{Z}$ is a field, but what about the other cases? Is it possible that there is such a matrix that is invertible but whose inverse cannot be found by this method? Or conversely, that if such an "inverse" were to be found, it would not actually be an inverse of the matrix? (As a related question: Are there rings over which matrices can only be inverted by using the determinant? Matrices over polynomials maybe? At least I haven't seen a technique for inverting these matrices with row reduction. But OTOH, I guess most of those matrices are probably not invertible anyway, since it would require the determinant to be of degree $0$).
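For what it's worth, here is a short SymPy check (my addition, not part of the question). Matrix.inv_mod inverts a matrix modulo $n$ whenever the determinant is a unit mod $n$; the second block shows the kind of failure case the question worries about, where a matrix is invertible over $\mathbb{R}$ but not over $\mathbb{Z}/12\mathbb{Z}$ because the real inverse has a denominator that is not a unit mod 12.

from sympy import Matrix

A = Matrix([[1, 2], [3, 5]])
print(A.inv_mod(12))                           # Matrix([[7, 2], [3, 11]])

A_inv_real = A.inv()                           # exact inverse over the rationals: [[-5, 2], [3, -1]]
print(A_inv_real.applyfunc(lambda x: x % 12))  # reduce mod 12 -> same matrix as above

# Invertible over R but not over Z/12Z: det = 2 is not a unit mod 12,
# and the real inverse [[1/2, -1/2], [0, 1]] has denominator 2 with no inverse mod 12.
B = Matrix([[2, 1], [0, 1]])
try:
    print(B.inv_mod(12))
except Exception as exc:
    print("not invertible mod 12:", exc)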
My question is: Has anyone constructed an $(\infty,2)$-category whose objects are (projective, maybe smooth, ...) varieties, and where the 1-morphisms from $X$ to $Y$ are given by $D^b_\infty\text{Coh}(X \times Y)$ (an $(\infty,1)$-enhancement of $D^b\text{Coh}(X \times Y)$)? By "$(\infty,1)$-enhancement" I mean some $(\infty,1)$-category whose homotopy category is $D^b\text{Coh}$. I would hope that binary composition would descend to the functor $$D^b\text{Coh}(Y\times Z) \times D^b\text{Coh}(X \times Y) \to D^b\text{Coh}(X \times Z)$$ that sends $(Q,P)$ to $\pi_{02,*}(\pi_{01}^*P \otimes \pi_{12}^*Q)$ a la the Fourier--Mukai transform (where tensor and the projections are the derived functors). If the answer is "yes", I wonder whether this is special to derived categories of coherent sheaves or whether it's a more general algebraic result about collections of $(\infty,1)$-categories of a certain kind? I'm new to infinity infinity stuff and I might have misused a technical term, so it might be best not to interpret my words too literally.
Basic Brilliance is the degree of "brightness" of a gemstone, resulting from light reflected from and refracted off the crown and pavilion facets. Brilliance is an overall visual perception based on the physical laws of refraction and reflection. Figure \(\PageIndex{1}\): Total Internal Reflection in a Diamond. Light reaching the facet at an angle larger than the critical angle will be reflected. When light hits the surface of a transparent optically denser medium (like a gemstone) it will be either reflected from the surface (bounced back) or partially refracted (bent and sent in another direction) inside the stone. When a light ray is refracted, it hits a pavilion facet and either is refracted out of the stone or reflected inside the stone depending on which angle the light ray approaches the facet (Fig.1, see its explanation below). Every stone has an angle at which the light ray will either be refracted (sent along a different path) or reflected (bounced back). This angle is known as the critical angle and depends on the refractive index of the stone (how much the light ray is bent). The higher the refractive index, the smaller the critical angle. When a light ray approaches a pavilion facet at an angle larger than the critical angle, it will be completely reflected inside the stone. The opposite dictates that light which falls at an angle smaller than the critical angle will be refracted outside the stone. This property is very important in the fashioning of a gemstone to create brilliance or "life". As every gemstone has its own critical angle, the design of the cut needs to be adjusted for the stone at hand. Explanation of Figure \(\PageIndex{1}\): In this example, a brilliant cut Diamond is shown, having a critical angle ("ca") of approximately 24°26' (24 degrees and 26 minutes). The path the light travels (the blue line) is as follows: light reaches the crown of the stone and is refracted (bent) inside the stone, bending towards point number 2 the light ray reaches a pavilion facet at an angle larger than the critical angle (ca), so it will be reflected (bounced) towards 3 at point 3, the light ray again reaches the pavilion facet at an angle larger than the critical angle so will be reflected towards 4 here, the light ray reaches the crown at an angle which is smaller than the ca, so it will be refracted out of the stone Note The critical angle is measured from an imaginary line named the normal (NO). This line is drawn at 90° to the surface of the facet at the point where the light reaches that facet. In a well proportioned transparent stone, all light that enters the faceted stone through the crown will be trapped inside the stone for a while and then be refracted out of the stone through the crown. This behavior is known as Total Internal Reflection (often abbreviated as TIR) and it is the key ingredient in the design of a refractometer. It should be noted that this unique phenomenon only occurs on the boundaries of an optically denser medium (gemstone) and an optically rarer medium (such as air) when light travels inside the denser medium. When a transparent stone is poorly cut (with either a too shallow or too deep pavilion), light will leave through the pavilion. Light bleeding through the pavilion facets causes a stone to appear either too light or too dark. Brilliance relies much on transparency, therefore stones with good transparency will show better brilliance when cut well. 
Of course, in colored gemstones, other factors such as color zoning, pleochroism, etc also play a role in deciding how a stone needs to be cut for best optical performance. Advanced \[critical\ angle = \arcsin \left (\frac {1}{n}\right )\ \Rightarrow \ critical\ angle = \arcsin \left ( \frac {1}{2.417}\right )\ \approx\ 24^\circ26'\] Figure \(\PageIndex{2}\): Relationship between critical angle and index of refraction The critical angle can be calculated as the inverse Sine of 1 divided by n (the refractive index) of a gemstone. For Diamond with n = 2.417, the calculation will be the inverse sine of 1/2.417 (where 1 is the refractive index of air). Note: the actual formula is arcsin(n2 / n1), but as we gemologists usually are only concerned with the critical angle between air and the gem, n2 = 1. From the critical angle formula, the relationship between the refractive index of a gemstone and the critical angle becomes apparent. The higher the refractive index, the smaller the critical angle.
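A small Python sketch of the formula above (my addition); the refractive indices for the non-diamond stones are approximate values used only for illustration.

from math import asin, degrees

def critical_angle_deg(n_gem, n_outside=1.0):
    # critical angle = arcsin(n2 / n1), with n2 the rarer medium (air) and n1 the gem
    return degrees(asin(n_outside / n_gem))

for name, n in [("Diamond", 2.417), ("Quartz", 1.544), ("Corundum", 1.762)]:
    angle = critical_angle_deg(n)
    deg = int(angle)
    minutes = (angle - deg) * 60
    print(f"{name} (n = {n}): critical angle = {deg} deg {minutes:.0f} min")
    # Diamond prints 24 deg 26 min, matching the figure in the text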
I'm not particularly fluent with LaTeX, but I need to generate a bunch of SVG files given the raw input LaTeX code. One way I found to do that was by using a Python script called "latex2svg", located here: https://github.com/tuxu/latex2svg. I am able to do this successfully in Linux, but not in Windows, and I don't understand why. In Ubuntu, if I run:

sudo apt-get install texlive-full

to install LaTeX, and then create the following Python script (located in the same directory as latex2svg.py):

from latex2svg import latex2svg

myeq1 = r'e^{i\pi}+1=0'
myeq2 = r'\mathbb{Q}'
myeq3 = r'\int_{-\infty}^{\infty}{\frac{e^{\frac{-x^2}{2}}}{\sqrt{2\pi}}}\ dx=1'

eqs = [myeq1, myeq2, myeq3]
for ii, eq in enumerate(eqs):
    myeq = r'\( ' + eq + r' \)'
    out = latex2svg(myeq)
    with open('out{}.svg'.format(ii), 'w') as f:
        f.write(out['svg'])

everything works perfectly, and it generates the SVG files correctly (see them here). However, on Windows 10, with MikTeX installed, if I run that exact same Python script, I get a warning "Warning: libgs not found", but the script continues and outputs some half-broken SVGs (see them here). I tried to install libgs with pip, but I get this horrible error and have no idea what to do about it. What do I need to do to get this to work on Windows? Although I have it working on Ubuntu, I would like to understand the issue to get it working in Windows.
1. Homework Statement

Consider the following situation: http://img206.imageshack.us/img206/9739/fbdow8.th.jpg [Broken]

The height of the incline is 'h'.
a) Find the net acceleration of the centre of mass of the combined system (of the incline and the block) relative to the ground.
b) Find the speed of the incline when the block slides down to the bottom of the incline.
[tex]g=10 m s^{-2}[/tex]
All the surfaces are assumed to be frictionless.

3. The Attempt at a Solution

First I would list all the external forces acting upon the system.
1. Mg on the incline due to the earth's pull.
2. mg on the block due to the earth's pull.
3. The normal provided by the ground to the incline, (M+m)g.
As these forces cancel out to zero, the acceleration of the centre of mass of the mentioned system is zero, as there is no net external force on the system. As the block falls down the incline, the incline will also move in order to conserve the momentum (which is zero). The velocity of the block as it reaches the bottom is [tex]\sqrt{2gh}[/tex] relative to the incline and along the incline. The velocity can be written as [tex]\sqrt{2gh} \cos \theta \hat{i}+\sqrt{2gh} \sin \theta \hat{j}[/tex] relative to the incline. Let V be the velocity of the incline in the x direction. On writing the velocity relative to the ground, I get [tex](\sqrt{2gh} \cos \theta - V) \hat{i}+\sqrt{2gh} \sin \theta \hat{j}[/tex]. Conserving momentum: [tex]m \left((\sqrt{2gh} \cos \theta - V) \hat{i}+\sqrt{2gh} \sin \theta \hat{j}\right)+MV\hat{i}=0[/tex] I am confused now: where does the j component come from? How will I conserve momentum?
This notion is rarely discussed in Quantopian forums, yet it can have some dramatic effects when carried out in an automated stock trading strategy. A stopping time in a stochastic process is when a predetermined value or limit is reached for the first time. Say you buy some stock and set a one-sigma exit from your entry price. That will be a random-like stopping time. In a timed-out exit, you would know when it would take place. Whereas when using a price target, for instance, you would not know when but you would know at what price the target would be reached. The exit will be executed using either method at the time or price specified and signal the end of the ongoing trade. In automated trading, the first stopping time should be considered a little short-sighted, and at times a lot. You are interested in the profit generated by your target move. And, getting out of your trade before reaching that target should be viewed as underscoring since your program would break its stopping time even though you would have profited from that early exit. But that, in general, is less productive than another alternative. The real problem I see is that most often the first stopping time is not the last. Your trading program should be looking for the last stopping time, or at least try getting closer to it. You must have noticed that when you set a price target and it is hit, the price, on average, keeps on going up, but without you. This says that your price target, even if it made you money, was not the best price target. It could have been higher since most of the time your original price target will be exceeded, and all you would have had to do was to wait some more. What could be done to improve performance would be to move your original price target up as the stock price is approaching it. It goes on the premise that there will be more than just one higher price above your target. In fact, you might have a series of higher highs following your exit. But, if you were not there, there is no way you could profit from them. In a lot of trading strategies on Quantopian, this is what I see: the equivalent of exiting on the first stopping time reached. And even more often below it. The below-target thing is done as a side effect of the trading methodology used, the same as for the first stopping time. We can express a price series as follows: $p(t) = p_0 + \sum_1^T \Delta p \,$ where $\sum_1^T \Delta p \,$ is the sum of all price variations after its initial price $p_0$ up to termination time $T$. The expression could be used for any trade $n$: $\;p_n(t) \cdot q_{0,\,n} = q_{0,\,n} \cdot (p_{0,\,n} + \sum_1^t \Delta p_n) $ where $t$ is somewhere between entry and exit, $0 < t \le T $. And the stopping time is the exit of that $n^{\text{th}}$ trade. What we are trying to predict is: $\Delta p = p_{t + \tau} - p_t $ representing a time-limited segment of the price series for a particular trade. Here $\tau$ represents a random time variable of undetermined length. Depending on how we slice and dice a price series, it is by adding all the pieces that we get our final result. In a timed-out exit, $\tau$ will have a fixed value (say a fixed number of days), whereas, in a price target scenario, $\tau$ will be a quasi-random variable of still undetermined length. We could see trades as having a first hitting time at $t_0 $ following whatever code triggered its entry. And some time later, a stopping time $t_0 + \tau $ as we exit the trade. This makes it a critical component of any trading system.
But, and this is a serious but, we are less than good at determining those hitting times and stopping times, especially if we trade in bulk like a hundred or more stocks at a time. Answer the question: how many times in your last 1000 trades have you exited a trade at its highest (long) or lowest (short) price with a profit? If you did this exercise, you might find out that only a small number of trades would have qualified out of that 1000. And since you know that you will be wrong most of the time, why not push those hitting and stopping times further out of reach in order to profit even more from your trading decisions? Some wonder (apparently not many) how such a high CAGR as presented in the first post could be possible. The answer is relatively simple. It was by pushing on those self-imposed hitting and stopping time delimiters. Moving price targets higher as prices got closer. And at the same time, increasing the number of trades while raising the average net profit per trade. Also, two critical numbers in any trading strategy. The task is especially difficult if you choose an optimizer like CVXOPT to do the trading since its main task is to flatten out volatility, seek a compromise for its set of weights, and thereby often accept, involuntarily, below-target exit prices. To counterbalance this, I forced the optimizer to accept my weighting scheme, which allowed share accumulation financed by continuously reinvesting trading profits in order to raise the overall CAGR. As trading strategy designers, it is our job to be innovative and force our strategies to do more, even if it requires strategy reengineering to do so. In the above-cited strategy, I pushed even further by adding delayed gratification. This pushed the equation toward: $\Delta p = p_{t + \tau +\kappa} - p_t $ where $\,\kappa$ represents some added time beyond the first stopping time. It means that even though your trade qualified for an exit at the first stopping time, the exit is delayed further with the average expectation that $\Delta p$ will be larger and thereby generate more profits for that trade. Notice that the shorts in that scenario were also profitable, although they were mostly used for protection and capital preservation. This is the same technique I used in my DEVX8 program, which was long-only, to raise the average net profit per trade. And raising the average net profit per trade while increasing the number of round-trip trades will result in a higher payoff matrix. Which was my goal from the start. It made $\kappa$ quite valuable. Again, we are all faced with choices. It is up to us to design the best trading strategies we can.
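A toy illustration (entirely my own; the function names, parameters, and synthetic price paths are made up and are not from any Quantopian strategy) of the difference between exiting at the first stopping time and letting the price target trail upward, which is the crudest form of the delayed-exit idea described above:

import numpy as np

rng = np.random.default_rng(7)

def simulate_exit(prices, target, trail=False, bump=0.02):
    """Fixed target: exit at the first stopping time (first touch of the target).
    Trailing target: every time the target is reached, raise it by `bump` and keep
    holding; if it is never reached again, exit on the last bar (a timed-out exit)."""
    t = target
    for p in prices:
        if p >= t:
            if not trail:
                return p
            t = p * (1.0 + bump)
    return prices[-1]

fixed, trailing = [], []
for _ in range(2000):
    # synthetic upward-drifting daily price path; purely illustrative, not market data
    path = 100.0 * np.cumprod(1.0 + rng.normal(0.0005, 0.01, 250))
    fixed.append(simulate_exit(path, 105.0))
    trailing.append(simulate_exit(path, 105.0, trail=True))

print("average exit price, fixed first-stopping-time target:", round(np.mean(fixed), 2))
print("average exit price, trailing target                 :", round(np.mean(trailing), 2))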
Answer $$\cos300^\circ=\cos^2150^\circ-\sin^2150^\circ$$ 25 is matched with F. Work Step by Step $$\cos300^\circ$$ We rewrite $300^\circ$ as double the angle $150^\circ$. In detail, $$\cos300^\circ=\cos(2\times150^\circ)$$ This fact points to the use of the double-angle identity for cosine, which states $$\cos(2A)=\cos^2A-\sin^2A$$ Therefore, for $A=150^\circ$: $$\cos300^\circ=\cos^2150^\circ-\sin^2150^\circ$$ The equation indicates that 25 should be matched with F.
Let T be a binary tree that is stored in the disk following the preorder layout. For example if this is $T$: then $T$ will be stored in the disk as follows: 10, 11, 0, 12, 13, 2, 7, 3, 14, 1, 15, 16, 4, 8, 17, 18, 5, 9, 6 Every node of the tree is a struct that stores the offset of the left child, the offset of the right child, the id of the node and a size variable. If a node is a leaf the offsets are set to -1. Suppose that now in every node $u \in T$ we want to know what is the size of the subtree rooted on $u$. Assume that the elements between the disk and the memory are transferred in blocks of size $B$, the size of the memory is $M$ and it holds that $M \geq B$. Is it possible to do that in $O(\frac{N}{B})$ I/Os?
I'm new to Linear Algebra and I'm having a hard time wrapping my head around linear transformations, specifically a rotation. From Anton's book (Elementary Linear Algebra, 11th Edition) he states: $T(e_1) = T(1,0) = (\cos\theta, \sin\theta)$ and $T(e_2) = T(0,1) = (-\sin\theta, \cos\theta)$ and $A = [T(e_1) | T(e_2)] = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}$ When I rotate a vector $\begin{bmatrix} x\\y \end{bmatrix}$ I get $\begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} x \cdot\cos\theta \, - \, y \cdot\sin\theta \\ x \cdot\sin\theta \, + \, y \cdot\cos\theta \end{bmatrix}$ Correct me if I'm wrong, but I thought that column 1 of $A$ $\begin{bmatrix} \cos\theta \\ \sin\theta \end{bmatrix}$, holds the 'x' values and column 2 holds the 'y' values. What I'm confused about is why does $x'$ contain both an $x$ component and a $y$ component? When I transform $\begin{bmatrix}1 \\ 0 \end{bmatrix}$ by a rotation $\theta$, the new $x$ value is just $\cos\theta$ while the new $y$ value is just $\sin\theta$. I don't understand why with the matrix transformation, $x'$ and $y'$ get both $x$ and $y$ components summed together. I hope I'm making sense.
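A tiny numerical check (my addition) may help: applying $A$ to the basis vectors returns the two columns, and applying it to a general $(x, y)$ mixes the components exactly as in the formulas above, because $(x,y) = x\,e_1 + y\,e_2$ and the transformation is linear.

import numpy as np

theta = np.deg2rad(30)
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

print(A @ np.array([1.0, 0.0]))   # [cos 30, sin 30]  -> the first column of A
print(A @ np.array([0.0, 1.0]))   # [-sin 30, cos 30] -> the second column of A

x, y = 2.0, 1.0
print(A @ np.array([x, y]))                              # [x cos - y sin, x sin + y cos]
print(np.array([x * np.cos(theta) - y * np.sin(theta),
                x * np.sin(theta) + y * np.cos(theta)])) # identical, written out by hand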
Editorial

Find the first bus Serval can see in each route and find the earliest one. For each route, finding the first bus Serval sees can work in $$$O(1)$$$. Or, for every time no more than $$$\max(s_i,t+\max(d_i))$$$, mark which bus would come (or that there will be no bus), then search for the nearest one.

Editorial

Fill in all the bricks, and then remove all bricks you must remove (wherever some view shows that position should be empty). This can be solved in $$$O(nm)$$$.

Author & preparation: Serval

Editorial

First let ( be $$$+1$$$, ) be $$$-1$$$ and ? be a missing place, so we will replace all the missing places in the new $$$+1$$$,$$$-1$$$ sequence by $$$+1$$$ and $$$-1$$$. Obviously, for each prefix of a correct parenthesis sequence, the sum of the new $$$+1$$$,$$$-1$$$ sequence is not less than $$$0$$$. And for the correct parenthesis sequence itself, the sum of the new sequence should be $$$0$$$. So we can calculate how many $$$+1$$$ (let $$$a$$$ denote it) and how many $$$-1$$$ (let $$$b$$$ denote it) we should fill in the missing places. According to the problem, our goal is to fill the missing places with $$$+1$$$ and $$$-1$$$ to make sure no strict prefix (prefixes except the whole sequence itself) exists with sum equal to $$$0$$$. This can be solved greedily. We want the prefix sums to be as large as possible to avoid the sum touching $$$0$$$. So let the first $$$a$$$ missing places be filled with $$$+1$$$ and the last $$$b$$$ missing places be filled with $$$-1$$$. Finally, check whether the result is a correct parenthesis sequence or not. The complexity is $$$O(n)$$$.

Author & preparation: bzh

Editorial

If we want to check whether $$$x$$$ is the answer (I didn't say I want to do binary search), then we can set all the numbers no less than $$$x$$$ as $$$1$$$, and the numbers less than $$$x$$$ as $$$0$$$. Then we can use $$$dp_i$$$ to represent that the maximum number on node $$$i$$$ is the $$$dp_i$$$-th smallest number of the leaves within the subtree of $$$i$$$. There should be at least $$$dp_i$$$ ones in the subtree of $$$i$$$ for the number on $$$i$$$ to be one. Then $$$k+1-dp_1$$$ is the final answer. Complexity $$$O(n)$$$.

Code

#include <cstdio>
using namespace std;
struct node{
    int to;
    node *next;
};
int i,j,m,n,a[1000005],f[1000005],dp[1000005],deg[1000005],k;
node *nd[1000005];
void addd(int u,int v){
    node *p=new node();
    p->to=v;
    p->next=nd[u];
    nd[u]=p;
}
void dfs(int u){
    node *p=nd[u];
    if ((u>1)&&(deg[u]==1)){ //note that u>1
        dp[u]=1;
        k++;
        return;
    }
    if (a[u]) dp[u]=1000000000; else dp[u]=0;
    while(p){
        dfs(p->to);
        if (a[u]){
            if (dp[p->to]<dp[u]) dp[u]=dp[p->to];
        }else{
            dp[u]+=dp[p->to];
        }
        p=p->next;
    }
}
int main(){
    scanf("%d",&n);
    for(i=1;i<=n;i++){
        scanf("%d",&a[i]);
    }
    for(i=2;i<=n;i++){
        scanf("%d",&f[i]);
        deg[i]++;
        deg[f[i]]++;
        addd(f[i],i);
    }
    dfs(1);
    printf("%d\n",k+1-dp[1]);
    return 0;
}

Another Solution

We can solve this problem greedily. At first we use $$$leaf_u$$$ to represent the number of leaves in the subtree whose root is $$$u$$$, and $$$f_u$$$ to represent the maximum number we can get on the node $$$u$$$. Note that since we are considering the subtree of $$$u$$$, we just number those $$$leaf_u$$$ nodes from $$$1$$$ to $$$leaf_u$$$, and $$$f_u$$$ is between $$$1$$$ and $$$leaf_u$$$, too. Let's think how to find the maximum number a node can get. If the operation of the node $$$u$$$ we are concerned with is $$$\max$$$, then for all the nodes $$$v$$$ whose parent is $$$u$$$, we can find the minimum $$$leaf_v-f_v$$$. Let $$$v_{min}$$$ denote the node that attains the minimum.
And we can construct an arrangement so that the number written in the node $$$u$$$ can be $$$leaf_u-(leaf_{v_{min}}-f_{v_{min}})$$$. When we number the leaves in the subtree of $$$u$$$ from $$$1$$$ to $$$leaf_u$$$, we number the leaves in the other subtrees of children of $$$u$$$ first, and then number the leaves in the subtree of $$$v_{min}$$$. It can be proved that this arrangement is optimal. If the operation of the node $$$u$$$ is $$$\min$$$, we can construct an arrangement of the numbers written in the leaves to make the number written in $$$u$$$ as large as possible. For all children $$$v$$$ of $$$u$$$, we number the first $$$f_v-1$$$ leaves in the subtree of $$$v$$$ first, according to the optimal arrangement of the node $$$v$$$. And then no matter how we arrange the remaining numbers, the number written in $$$u$$$ is $$$1+\sum_{v \text{ is a son of } u} (f_v-1)$$$. This is the optimal arrangement. We can use this method and get the final answer $$$f_1$$$.

Code for Another Solution

#include <cstdio>
using namespace std;
const int N=1000005;
int n,p;
int h[N],nx[N];
int t[N],sz[N];
void getsize(int u){
    if (!h[u]) sz[u]++;
    for (int i=h[u];i;i=nx[i]) {
        getsize(i);
        sz[u]+=sz[i];
    }
}
int getans(int u){
    if (!h[u]) return 1;
    int ret=0,tmp;
    if (t[u]) {
        for (int i=h[u];i;i=nx[i]) {
            tmp=getans(i)+sz[u]-sz[i];
            if (tmp>ret) ret=tmp;
        }
        return ret;
    }
    for (int i=h[u];i;i=nx[i]) ret+=getans(i)-1;
    return ret+1;
}
int main(){
    scanf("%d",&n);
    for (int i=1;i<=n;i++) scanf("%d",&t[i]);
    for (int i=2;i<=n;i++) {
        scanf("%d",&p);
        nx[i]=h[p];
        h[p]=i;
    }
    getsize(1);
    printf("%d\n",getans(1));
    return 0;
}

Editorial

If the answer to a rectangle is odd, there must be exactly one head or tail in that rectangle. Otherwise, there must be an even number ($$$0$$$ or $$$2$$$) of heads and tails in the given rectangle. We make queries for each of the columns except the last one, then we know for each column whether there is an odd number of heads and tails in it or not. Because the sum is even, we can know the parity of the last column. If the head and tail are in different columns, we can find the two columns with odd answers and get them. Then we can do binary search for each of those two columns separately and get the answer in no more than $$$999+10+10=1019$$$ queries in total. If the head and tail are in the same column, we will get all even answers and know that fact. Then we apply the same method for rows. Then we can just do binary search for one of the rows, and use the fact that the other is in the same column as this one. In this case, we have made no more than $$$999+999+10=2008$$$ queries.

Bonus: How to save more queries? How to save one more query?

We first make queries for row $$$2$$$ to row $$$n-1$$$. If we find any ones, make the one remaining row query, and use the method above. If we cannot find any ones, we make $$$n-1$$$ queries for columns. If none of them provides a one, we know that for row $$$1$$$ and row $$$n$$$, there must be exactly one head or tail in each of them, and they are in the same column. In this case, we do binary search for one of the rows, then the total number of queries is $$$998+999+10=2007$$$. If we can find two ones in the columns, we know that: if one of them is in row $$$2$$$ to row $$$n-1$$$, the other must be in the same row, because for row $$$2$$$ to row $$$n-1$$$, we know that there is an even number of heads and tails, and they can't appear in the other columns.
Then we do binary search; when we divide the length into two halves, we let the one close to the middle be the longer one, and the one close to one end be the shorter one. Then, if it turns out that the answer is in row $$$1$$$ (or row $$$n$$$), the number of queries must be $$$\log n$$$ rounded down, and we can use one more query to identify whether the head or tail in the other column is in row $$$1$$$ or row $$$n$$$. If it turns out that the answer is in one of the rows from row $$$2$$$ to row $$$n-1$$$, we may have used $$$\log n$$$ queries rounded up, but in this case, we don't need that extra query. So the total number of queries is $$$999+998+9+1=2007$$$ (or $$$999+998+10=2007$$$). In fact, if the interactor is not adaptive and we query for columns and rows randomly, we can use far fewer than $$$2007$$$ queries. And if we query for rows and columns alternately, we can save more queries.

Editorial

Without loss of generality, assume that $$$l=1$$$. For a segment covering, the total length of the legal intervals is the probability that we choose another point $$$P$$$ on this segment randomly such that it is in the legal intervals. Since all $$$2n+1$$$ points ($$$P$$$ and the endpoints of each segment) are chosen randomly and independently, we only need to find the probability that point $$$P$$$ is in the legal intervals. Note that only the order of these $$$2n+1$$$ points matters. Because the points are chosen in the segment, the probability that some of them coincide is $$$0$$$, so we can assume that no two points coincide. Now the problem is how to calculate the number of arrangements such that $$$P$$$ is between at least $$$k$$$ pairs of endpoints. It can be solved by dynamic programming in time complexity of $$$O(n^2)$$$. We define $$$f(i,j,x)$$$ as the number of arrangements for the first $$$i$$$ positions, with $$$j$$$ started segments that haven't been matched, and $$$P$$$ having appeared $$$x$$$ times (obviously $$$x=0$$$ or $$$1$$$). So we get three different types of transition for the $$$i$$$-th position below:

Place $$$P$$$ at the $$$i$$$-th position (if $$$j\geq k$$$): $$$f(i-1,j,0)\rightarrow f(i,j,1)$$$

Start a new segment (if $$$i+j+x<2n$$$): $$$f(i-1,j-1,x)\rightarrow f(i,j,x)$$$

Match a started segment; note that we have $$$j+1$$$ choices of segments: $$$f(i-1,j+1,x)\times (j+1)\rightarrow f(i,j,x)$$$

Then $$$f(2n+1,0,1)$$$ is the number of legal arrangements. Obviously, the total number of arrangements is $$$(2n+1)!$$$. However, there are $$$n$$$ pairs of endpoints whose indices can be swapped, and the indices of the $$$n$$$ segments can be rearranged. So the final answer is $$$\frac{f(2n+1,0,1)\times n! \times 2^n}{(2n+1)!}$$$.

Code

#include <cstdio>
using namespace std;
const int mod=998244353;
const int N=4005;
const int K=2005;
int n,k,l;
int fac,ans;
int f[N][K][2];
int fpw(int b,int e=mod-2){
    if (!e) return 1;
    int ret=fpw(b,e>>1);
    ret=1ll*ret*ret%mod;
    if (e&1) ret=1ll*ret*b%mod;
    return ret;
}
int main(){
    scanf("%d%d%d",&n,&k,&l);
    f[0][0][0]=1;
    for (int i=1;i<=2*n+1;i++)
        for (int j=0;j<=n;j++)
            for (int x=0;x<=1;x++)
                if (f[i-1][j][x]) {
                    if (j) f[i][j-1][x]=(f[i][j-1][x]+1ll*f[i-1][j][x]*j)%mod;
                    if (i+j-1<2*n+x) f[i][j+1][x]=(f[i][j+1][x]+f[i-1][j][x])%mod;
                    if (j>=k&&!x) f[i][j][1]=(f[i][j][1]+f[i-1][j][x])%mod;
                }
    fac=1;
    for (int i=n+1;i<=2*n+1;i++) fac=1ll*fac*i%mod;
    ans=f[2*n+1][0][1];
    ans=1ll*ans*fpw(2,n)%mod;
    ans=1ll*ans*fpw(fac)%mod*l%mod;
    printf("%d\n",ans);
    return 0;
}

UPD: We fixed some mistakes and added another solution for D.
Definition:P-adic Number/P-adic Norm Completion of Rational Numbers

Definition

$i) \quad$ there exists a distance-preserving monomorphism $\phi:\struct{\Q,\norm {\,\cdot\,}_p} \to \struct{\Q_p,\norm {\,\cdot\,}_p}$.
$iii) \quad \struct{\Q_p,\norm {\,\cdot\,}_p}$ is complete.

Notes

Furthermore, by Normed Division Ring is Dense Subring of Completion, the mapping $\phi: \Q \to \map \phi \Q$ is an isometric isomorphism onto the dense subfield $\map \phi \Q$ of $\Q_p$. It is natural to identify $\Q$ with $\map \phi \Q$.

Also see

$p$-adic Norm is Non-Archimedean Norm for a proof that $\struct {\Q, \norm {\,\cdot\,}_p}$ is a valued field.
$p$-adic Norm not Complete on Rational Numbers for a proof that $\struct {\Q, \norm {\,\cdot\,}_p}$ is not a complete valued field.
Completion Theorem for a proof that the completion of $\struct {\Q, \norm {\,\cdot\,}_p}$ exists and is unique up to isometric isomorphism.
Normed Division Ring is Dense Subring of Completion for a proof that $\struct {\Q, \norm {\,\cdot\,}_p}$ is isometrically isomorphic to a dense subfield of $\struct {\Q_p, \norm {\,\cdot\,}_p}$.
Non-Archimedean Division Ring Iff Non-Archimedean Completion, for a proof that $\norm {\, \cdot \,}_p$ on $\Q_p$ is a non-Archimedean norm.
Multi-index notation

$\def\a{\alpha}$ $\def\b{\beta}$

An abbreviated form of notation in analysis, imitating the vector notation by single letters rather than by listing all vector components.

Rules

A point with coordinates $(x_1,\dots,x_n)$ in the $n$-dimensional space (real, complex or over any other field $\Bbbk$) is denoted by $x$. For a multi-index $\a=(\a_1,\dots,\a_n)\in\Z_+^n$ the expression $x^\a$ denotes the product $x^\a=x_1^{\a_1}\cdots x_n^{\a_n}$. Other expressions related to multi-indices are expanded as follows:
$$\begin{aligned}|\a|&=\a_1+\cdots+\a_n\in\Z_+,\\\a!&=\a_1!\cdots\a_n!\qquad\text{(as usual, }0!=1!=1),\\x^\a&=x_1^{\a_1}\cdots x_n^{\a_n}\in \Bbbk[x]=\Bbbk[x_1,\dots,x_n],\\\a\pm\b&=(\a_1\pm\b_1,\dots,\a_n\pm\b_n)\in\Z^n.\end{aligned}$$
The convention extends to the binomial coefficients ($\a\geqslant\b$ means, quite naturally, that $\a_1\geqslant\b_1,\dots,\a_n\geqslant\b_n$):
$$\binom{\a}{\b}=\binom{\a_1}{\b_1}\cdots\binom{\a_n}{\b_n}=\frac{\a!}{\b!(\a-\b)!},\qquad \text{if}\quad \a\geqslant\b.$$
The partial derivative operators are also abbreviated:
$$\partial_x=\biggl(\frac{\partial}{\partial x_1},\dots,\frac{\partial}{\partial x_n}\biggr)=\partial\quad\text{if the choice of $x$ is clear from context.}$$
The notation for partial derivatives is also quite natural: for a differentiable function $f(x_1,\dots,x_n)$ of $n$ variables, $$\partial^\a f=\frac{\partial^{|\a|} f}{\partial x^\a}=\frac{\partial^{\a_1}}{\partial x_1^{\a_1}}\cdots\frac{\partial^{\a_n}}{\partial x_n^{\a_n}}f=\frac{\partial^{|\a|}f}{\partial x_1^{\a_1}\cdots\partial x_n^{\a_n}}.$$
If $f$ is itself a vector-valued function of dimension $m$, the above partial derivatives are $m$-vectors. The notation $$\partial f=\bigg(\frac{\partial f}{\partial x}\bigg)$$ is used to denote the Jacobian matrix of a function $f$ (in general, only rectangular).

Caveat

The notation $\a>0$ is ambiguous, especially in mathematical economics, as it may either mean that $\a_1>0,\dots,\a_n>0$, or $0\ne\a\geqslant0$.

Examples

Binomial formula
$$ (x+y)^\a=\sum_{0\leqslant\b\leqslant\a}\binom\a\b x^{\a-\b} y^\b. $$

Leibniz formula for higher derivatives of multivariate functions
$$ \partial^\a(fg)=\sum_{0\leqslant\b\leqslant\a}\binom\a\b \partial^{\a-\b}f\cdot \partial^\b g. $$
In particular, $$ \partial^\a x^\beta=\begin{cases} \frac{\b!}{(\b-\a)!}x^{\b-\a},\qquad&\text{if }\a\leqslant\b, \\ \quad 0,&\text{otherwise}. \end{cases} $$

Taylor series of a smooth function
If $f$ is infinitely smooth near the origin $x=0$, then its Taylor series (at the origin) has the form $$ \sum_{\a\in\Z_+^n}\frac1{\a!}\partial^\a f(0)\cdot x^\a. $$

Symbol of a differential operator
If $$D=\sum_{|\a|\le d}a_\a(x)\partial^\a$$ is a linear partial differential operator with variable coefficients $a_\a(x)$, then its principal symbol is the function of $2n$ variables $S(x,p)=\sum_{|\a|=d}a_\a(x)p^\a$.
So here is a question related to the grid efficiency in MSN Minesweeper. $\begin{matrix} \square & \square & B_1 & B_2 & B_3 & \square & \square\\ \cdots & 1 & 1 & 1 & 1 & 1 & \cdots \\ \cdots & 0 & 0 & 0 & 0 & 0 & \cdots \end{matrix}$ Suppose you have a line of 1s along a long enough edge (so that, essentially, you get no information from the two endpoints of the edge). Then at any of the 1s, the chance of a mine appearing on a given unrevealed grid should be 1/3... or is it?

Assume that the global distribution is uniform, i.e. there is an equal chance of getting any of the $C^n_r$ combinations (ignoring the corner rule since it is less relevant). Similarly, without further hints, if we have $n$ remaining unrevealed grids and $r$ remaining mines, then the number of possible combinations is again $C^n_r$. In this formula there is a single quantity that varies with the actual configuration: the number of remaining mines, which depends on the length of the edge modulo 3. Let $r_i$ be the remaining mine count after revealing the row, given the configuration $B_i$. Then $P(B_i) = \frac{C^n_{r_i}}{\sum_k C^n_{r_k}}.$

If the number of mines revealed is independent of the configuration then we can conclude that the chances are equal. Otherwise we can compare the binomial terms: $C^n_{r+1} = C^n_r\,\frac{n-r}{r+1}.$ The usual ratio between grids and mines is 5:1 -- but that varies greatly depending on the given situation. In particular there are loads of 1s and 0s in our example, which boosts the ratio greatly. At a ratio of 2:1 an extra mine would not change the likelihood of the configuration, but of course such a ratio is unrealistic in a minesweeper setup, so we can say that the configuration that leaves fewer mines on the row is more likely.

One may raise the question: shouldn't the equilibrium happen at 3:1 instead of 2:1, from a likelihood perspective? We can visualize the likelihood approach with the following problem. In a box there are $n$ balls, of which $r\approx n/x$ are white and the rest are black (i.e., the ratio between balls and white balls is $x:1$). Suppose we draw $\alpha$ balls and we want to compare the chance of having $\beta \approx \alpha/3$ or $\beta+1$ white balls in the draw. For the two events to be about equally likely we need $\frac{C^{n-\alpha}_{r-\beta}C^{\alpha}_{\beta}}{C^{n-\alpha}_{r-\beta-1}C^{\alpha}_{\beta +1}} \approx \frac{(n-\alpha-(r-\beta))(\beta +1)}{(r-\beta)(\alpha -\beta)} \approx \frac{1}{2}\, \frac{n-\alpha - (r-\beta)}{r-\beta} = 1.$ That gives $n-\alpha \approx 3(r-\beta)$, or $n \approx 3r$.

In our problem, however, the term $C^{\alpha}_{\beta}$ does not appear, because we do not take the full set of combinations within the balls drawn. In other words, we do not allow orders like BBBWWWWWW, BBWWBWWWW, BWWWWWWBB, ... The only three allowed scenarios are BWWBWWBWW, WBWWBWWBW, WWBWWBWWB. Once we fix that in our formula, similar calculations yield $n\approx 2r$ instead of $3r$.

* Of course there is no such 'infinite edge' in minesweeper.
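Here is a minimal Python sketch of the weighting $P(B_i) \propto C^n_{r_i}$ described above; the board numbers (L border cells above the row of 1s, n other unrevealed cells, r mines in total) are illustrative, not taken from an actual game.

from math import comb

def alignment_probabilities(L, n, r):
    weights = []
    for offset in range(3):                        # the three period-3 mine alignments
        mines_on_edge = len(range(offset, L, 3))   # mines this alignment places on the edge
        weights.append(comb(n, r - mines_on_edge)) # ways to place the remaining mines elsewhere
    total = sum(weights)
    return [w / total for w in weights]

print(alignment_probabilities(L=20, n=200, r=44))   # ~5:1 cells to mines: the alignment using fewer edge mines dominates
print(alignment_probabilities(L=20, n=200, r=110))  # ~2:1 cells to mines: the three alignments are nearly even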
The endpoints definitely give extra information, unless the board looks like the following: $\begin{matrix} \square & \square & \square & \square & \square & \square & \square & \square & \square\\ \square & 1 & 1 & 1 & \cdots & 1 & 1 & 1 & \square\\ \square & 1 & 0 & 0 & \cdots & 0 & 0 & 1 & \square\\ \square & 1 & 0 & 0 & \cdots & 0 & 0 & 1 & \square\\ \square & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \square \end{matrix}$ Any extra information imposes extra difficulty on this kind of analysis, but the aim here is to give a first idea of what is happening.

To give an application, consider the following example. I said on my Twitter that red made a bad move. The reason is simple: there is only a [something between 1/4 and 1/2] chance that the grid contains a mine. On the other hand, if it does not, the opponent immediately gets a free flag, with everything else remaining the same. But... is it possible to calculate the exact chance? Yes, and this is not hard at all: this is a 16x16 board with 51 mines, and the shown area is the only revealed portion. To calculate the probability we list the only few possible scenarios; the chance then follows from the binomial coefficients.
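A minimal sketch of that last step, assuming the consistent local mine placements have already been enumerated by hand; the scenarios list, the hidden_far count and the mine counts below are hypothetical placeholders, not read off the actual board.

from math import comb

TOTAL_MINES = 51       # 16x16 board with 51 mines, as stated above
hidden_far = 200       # unrevealed cells not constrained by any revealed number (placeholder)

# Each consistent local placement: (does it put a mine on the target grid?, mines it uses)
scenarios = [(True, 4), (False, 3), (False, 4)]   # hypothetical enumeration

total = favourable = 0
for uses_target, mines_used in scenarios:
    weight = comb(hidden_far, TOTAL_MINES - mines_used)   # ways to place the rest of the mines
    total += weight
    if uses_target:
        favourable += weight

print(f"P(target grid has a mine) = {favourable / total:.3f}")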
Gibbs free energy can be defined as: $$dG=VdP-SdT+\sum_i\mu_i dn_i$$ where $\mu=(\frac{\partial G}{\partial n})_{P,T}$. The last term, $\sum_i\mu_i dn_i$, allows open systems to be considered (where $dn$ is not zero). However, this can also be used for state functions like enthalpy too: $$dH=TdS+VdP+\sum_i\mu_i dn_i$$ But my question is: why can this be used for enthalpy, considering that $\mu=(\frac{\partial G}{\partial n})_{P,T}$ (i.e. it is a partial derivative of $G$, not $H$)? Perhaps it's clearer for me to write the total differential out like this: $$dH=\left(\frac{\partial H}{\partial S}\right)_{P,n}dS+\left(\frac{\partial H}{\partial P}\right)_{S,n}dP+\left(\frac{\partial G}{\partial n}\right)_{P,T}dn$$ where I have considered a single-component open system (the component may enter or leave the system).

Actually, there is a completely valid definition of the chemical potential in terms of enthalpy (I realize your confusion on this point may be my fault, since I initially told you the opposite in another post). The correct form for the chemical potential as a partial molar enthalpy is (as I think you already suspected): $$ \mu_i=\left(\frac {\partial H}{\partial n_i}\right)_{P,S,n_{j\neq i}} $$ So the total exact differential is expressible as you had originally expected: $$ dH=TdS+VdP+\sum_i\mu_i dn_i=\left(\frac {\partial H}{\partial S}\right)_{P,n_i}dS+\left(\frac {\partial H}{\partial P}\right)_{S,n_i}dP+\sum_i\left(\frac {\partial H}{\partial n_i}\right)_{P,S,n_{j\neq i}}dn_i $$ The definition of the chemical potential as a partial molar Gibbs free energy is only valid at constant temperature and pressure, so it wouldn't make sense to write it that way in the expression for the exact differential of enthalpy. Sorry for any confusion my initial errors may have caused you. Hope this clears it up.
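For completeness, a short way to see why the very same $\mu_i$ shows up in both differentials is to use the relation $H = G + TS$:

$$dH = dG + T\,dS + S\,dT = \left(V\,dP - S\,dT + \sum_i \mu_i\,dn_i\right) + T\,dS + S\,dT = T\,dS + V\,dP + \sum_i \mu_i\,dn_i.$$

Since the natural variables of $H$ are $(S, P, n_i)$, matching the coefficient of $dn_i$ gives $\mu_i = \left(\frac{\partial H}{\partial n_i}\right)_{S,P,n_{j\neq i}} = \left(\frac{\partial G}{\partial n_i}\right)_{T,P,n_{j\neq i}}$.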
Skills to Develop

Understand linear, square, and cubic measure
Use properties of rectangles
Use properties of triangles
Use properties of trapezoids

Be prepared! Before you get started, take this readiness quiz.

The length of a rectangle is 3 less than the width. Let w represent the width. Write an expression for the length of the rectangle. If you missed this problem, review Example 2.26.
Simplify: \(\frac{1}{2}\)(6h). If you missed this problem, review Example 7.7.
Simplify: \(\frac{5}{2}\)(10.3 − 7.9). If you missed this problem, review Example 5.36.

In this section, we’ll continue working with geometry applications. We will add some more properties of triangles, and we’ll learn about the properties of rectangles and trapezoids.

Understand Linear, Square, and Cubic Measure

When you measure your height or the length of a garden hose, you use a ruler or tape measure (Figure 9.13). A tape measure might remind you of a line—you use it for linear measure, which measures length. Inch, foot, yard, mile, centimeter and meter are units of linear measure. Figure 9.13 - This tape measure measures inches along the top and centimeters along the bottom.

When you want to know how much tile is needed to cover a floor, or the size of a wall to be painted, you need to know the area, a measure of the region needed to cover a surface. Area is measured in square units. We often use square inches, square feet, square centimeters, or square miles to measure area. A square centimeter is a square that is one centimeter (cm) on each side. A square inch is a square that is one inch on each side (Figure 9.14). Figure 9.14 - Square measures have sides that are each 1 unit in length.

Figure 9.15 shows a rectangular rug that is 2 feet long by 3 feet wide. Each square is 1 foot wide by 1 foot long, or 1 square foot. The rug is made of 6 squares. The area of the rug is 6 square feet. Figure 9.15 - The rug contains six squares of 1 square foot each, so the total area of the rug is 6 square feet.

When you measure how much it takes to fill a container, such as the amount of gasoline that can fit in a tank, or the amount of medicine in a syringe, you are measuring volume. Volume is measured in cubic units such as cubic inches or cubic centimeters. When measuring the volume of a rectangular solid, you measure how many cubes fill the container. We often use cubic centimeters, cubic inches, and cubic feet. A cubic centimeter is a cube that measures one centimeter on each side, while a cubic inch is a cube that measures one inch on each side (Figure 9.16). Figure 9.16 - Cubic measures have sides that are 1 unit in length.

Suppose the cube in Figure 9.17 measures 3 inches on each side and is cut on the lines shown. How many little cubes does it contain? If we were to take the big cube apart, we would find 27 little cubes, with each one measuring one inch on all sides. So each little cube has a volume of 1 cubic inch, and the volume of the big cube is 27 cubic inches. Figure 9.17 - A cube that measures 3 inches on each side is made up of 27 one-inch cubes, or 27 cubic inches.

Example 9.25: For each item, state whether you would use linear, square, or cubic measure: (a) amount of carpeting needed in a room (b) extension cord length (c) amount of sand in a sandbox (d) length of a curtain rod (e) amount of flour in a canister (f) size of the roof of a doghouse.

Solution (a) You are measuring how much surface the carpet covers, which is the area.
square measure (b) You are measuring how long the extension cord is, which is the length. linear measure (c) You are measuring the volume of the sand. cubic measure (d) You are measuring the length of the curtain rod. linear measure (e) You are measuring the volume of the flour. cubic measure (f) You are measuring the area of the roof. square measure Exercise 9.49: Determine whether you would use linear, square, or cubic measure for each item. (a) amount of paint in a can (b) height of a tree (c) floor of your bedroom (d) diameter of bike wheel (e) size of a piece of sod (f) amount of water in a swimming pool Exercise 9.50: Determine whether you would use linear, square, or cubic measure for each item. (a) volume of a packing box (b) size of patio (c) amount of medicine in a syringe (d) length of a piece of yarn (e) size of housing lot (f) height of a flagpole Many geometry applications will involve finding the perimeter or the area of a figure. There are also many applications of perimeter and area in everyday life, so it is important to make sure you understand what they each mean. Picture a room that needs new floor tiles. The tiles come in squares that are a foot on each side—one square foot. How many of those squares are needed to cover the floor? This is the area of the floor. Next, think about putting new baseboard around the room, once the tiles have been laid. To figure out how many strips are needed, you must know the distance around the room. You would use a tape measure to measure the number of feet around the room. This distance is the perimeter. Definition: Perimeter and Area The perimeter is a measure of the distance around a figure. The area is a measure of the surface covered by a figure. Figure 9.18 shows a square tile that is 1 inch on each side. If an ant walked around the edge of the tile, it would walk 4 inches. This distance is the perimeter of the tile. Since the tile is a square that is 1 inch on each side, its area is one square inch. The area of a shape is measured by determining how many square units cover the shape. Figure 9.18 - Perimeter = 4 inches, Area = 1 square inch. When the ant walks completely around the tile on its edge, it is tracing the perimeter of the tile. The area of the tile is 1 square inch. Example 9.26: Each of two square tiles is 1 square inch. Two tiles are shown together. (a) What is the perimeter of the figure? (b) What is the area? Solution (a) The perimeter is the distance around the figure. The perimeter is 6 inches. (b) The area is the surface covered by the figure. There are 2 square inch tiles so the area is 2 square inches. Exercise 9.51: Find the (a) perimeter and (b) area of the figure: Exercise 9.52: Find the (a) perimeter and (b) area of the figure: Use the Properties of Rectangles A rectangle has four sides and four right angles. The opposite sides of a rectangle are the same length. We refer to one side of the rectangle as the length, L, and the adjacent side as the width, W. See Figure 9.19. Figure 9.19 - A rectangle has four sides, and four right angles. The sides are labeled L for length and W for width. The perimeter, P, of the rectangle is the distance around the rectangle. If you started at one corner and walked around the rectangle, you would walk L + W + L + W units, or two lengths and two widths. The perimeter then is $$\begin{split} P = L + &W + L + W \\ &or \\ P = 2L &+ 2W \end{split}$$ What about the area of a rectangle? Remember the rectangular rug from the beginning of this section. 
It was 2 feet long by 3 feet wide, and its area was 6 square feet. See Figure 9.20. Since A = 2 • 3, we see that the area, A, is the length, L, times the width, W, so the area of a rectangle is A = L • W. Figure 9.20 - The area of this rectangular rug is 6 square feet, its length times its width.

Definition: Properties of Rectangles Rectangles have four sides and four right (90°) angles. The lengths of opposite sides are equal. The perimeter, P, of a rectangle is the sum of twice the length and twice the width. See Figure 9.19. $$P = 2L + 2W$$ The area, A, of a rectangle is the length times the width. $$A = L \cdot W$$

For easy reference as we work the examples in this section, we will restate the Problem Solving Strategy for Geometry Applications here. HOW TO: USE A PROBLEM SOLVING STRATEGY FOR GEOMETRY APPLICATIONS Step 1. Read the problem and make sure you understand all the words and ideas. Draw the figure and label it with the given information. Step 2. Identify what you are looking for. Step 3. Name what you are looking for. Choose a variable to represent that quantity. Step 4. Translate into an equation by writing the appropriate formula or model for the situation. Substitute in the given information. Step 5. Solve the equation using good algebra techniques. Step 6. Check the answer in the problem and make sure it makes sense. Step 7. Answer the question with a complete sentence.

Example 9.27: The length of a rectangle is 32 meters and the width is 20 meters. Find (a) the perimeter, and (b) the area. Solution (a) Step 1. Read the problem. Draw the figure and label it with the given information. Step 2. Identify what you are looking for. the perimeter of a rectangle Step 3. Name. Choose a variable to represent it. Let P = the perimeter Step 4. Translate. Write the appropriate formula. Substitute. Step 5. Solve the equation. $$\begin{split} P &= 64 + 40 \\ P &= 104 \end{split}$$ Step 6. Check. $$\begin{split} P &\stackrel{?}{=} 104 \\ 20 + 32 + 20 + 32 &\stackrel{?}{=} 104 \\ 104 &= 104\; \checkmark \end{split}$$ Step 7. Answer the question. The perimeter of the rectangle is 104 meters. (b) Step 1. Read the problem. Draw the figure and label it with the given information. Step 2. Identify what you are looking for. the area of a rectangle Step 3. Name. Choose a variable to represent it. Let A = the area Step 4. Translate. Write the appropriate formula. Substitute. Step 5. Solve the equation. $$A = 640$$ Step 6. Check. $$\begin{split} A &\stackrel{?}{=} 640 \\ 32 \cdot 20 &\stackrel{?}{=} 640 \\ 640 &= 640\; \checkmark \end{split}$$ Step 7. Answer the question. The area of the rectangle is 640 square meters.

Exercise 9.53: The length of a rectangle is 120 yards and the width is 50 yards. Find (a) the perimeter and (b) the area. Exercise 9.54: The length of a rectangle is 62 feet and the width is 48 feet. Find (a) the perimeter and (b) the area.

Example 9.28: Find the length of a rectangle with perimeter 50 inches and width 10 inches. Solution Step 1. Read the problem. Draw the figure and label it with the given information. Step 2. Identify what you are looking for. the length of the rectangle Step 3. Name. Choose a variable to represent it. Let L = the length Step 4. Translate. Write the appropriate formula. Substitute. Step 5. Solve the equation. $$\begin{split} 50 \textcolor{red}{-20} &= 2L + 20 \textcolor{red}{-20} \\ 30 &= 2L \\ \frac{30}{\textcolor{red}{2}} &= \frac{2L}{\textcolor{red}{2}} \\ 15 &= L \end{split}$$ Step 6. Check.
$$\begin{split} P &\stackrel{?}{=} 50 \\ 15 + 10 + 15 + 10 &\stackrel{?}{=} 50 \\ 50 &= 50\; \checkmark \end{split}$$ Step 7. Answer the question. The length is 15 inches.

Exercise 9.55: Find the length of a rectangle with a perimeter of 80 inches and width of 25 inches. Exercise 9.56: Find the length of a rectangle with a perimeter of 30 yards and width of 6 yards.

In the next example, the width is defined in terms of the length. We’ll wait to draw the figure until we write an expression for the width so that we can label one side with that expression.

Example 9.29: The width of a rectangle is two inches less than the length. The perimeter is 52 inches. Find the length and width. Solution Step 1. Read the problem. Step 2. Identify what you are looking for. the length and width of the rectangle Step 3. Name. Choose a variable to represent it. Now we can draw a figure using these expressions for the length and width. Since the width is defined in terms of the length, we let L = length. The width is two inches less than the length, so we let L − 2 = width. Step 4. Translate. Write the appropriate formula. The formula for the perimeter of a rectangle relates all the information. Substitute in the given information. Step 5. Solve the equation. $$52 = 2L + 2L - 4$$ Combine like terms. $$52 = 4L - 4$$ Add 4 to each side. $$56 = 4L$$ Divide by 4. $$\begin{split} \frac{56}{4} &= \frac{4L}{4} \\ 14 &= L \end{split}$$The length is 14 inches. Now we need to find the width. The width is L − 2. $$\begin{split} &L - 2 \\ &\textcolor{red}{14} - 2 \\ &12 \end{split}$$The width is 12 inches. Step 6. Check. Since 14 + 12 + 14 + 12 = 52, this works! Step 7. Answer the question. The length is 14 inches and the width is 12 inches.

Exercise 9.57: The width of a rectangle is seven meters less than the length. The perimeter is 58 meters. Find the length and width. Exercise 9.58: The length of a rectangle is eight feet more than the width. The perimeter is 60 feet. Find the length and width.

Example 9.30: The length of a rectangle is four centimeters more than twice the width. The perimeter is 32 centimeters. Find the length and width. Solution Step 1. Read the problem. Step 2. Identify what you are looking for. the length and width Step 3. Name. Choose a variable to represent it. Let W = width The length is four more than twice the width. 2w + 4 = length Step 4. Translate. Write the appropriate formula and substitute in the given information. Step 5. Solve the equation. $$\begin{split} 32 &= 4w + 8 + 2w \\ 32 &= 6w + 8 \\ 24 &= 6w \\ 4 &= w\quad width \\ 2w &+ 4\quad length \\ 2(\textcolor{red}{4}) &+ 4 \\ 12&\quad The\; length\; is\; 12\; cm \ldotp \end{split}$$ Step 6. Check. $$\begin{split} P &= 2L + 2W \\ 32 &\stackrel{?}{=} 2 \cdot 12 + 2 \cdot 4 \\ 32 &= 32\; \checkmark \end{split}$$ Step 7. Answer the question. The length is 12 cm and the width is 4 cm.

Exercise 9.59: The length of a rectangle is eight more than twice the width. The perimeter is 64 feet. Find the length and width. Exercise 9.60: The width of a rectangle is six less than twice the length. The perimeter is 18 centimeters. Find the length and width.

Example 9.31: The area of a rectangular room is 168 square feet. The length is 14 feet. What is the width? Solution Step 1. Read the problem. Step 2. Identify what you are looking for. the width of a rectangular room Step 3. Name. Choose a variable to represent it. Let W = width Step 4. Translate. Write the appropriate formula and substitute in the given information.
$$\begin{split} A &= LW \\ 168 &= 14W \end{split}$$ Step 5. Solve the equation. $$\begin{split} \frac{168}{14} &= \frac{14W}{14} \\ 12 &= W \end{split}$$ Step 6. Check. $$\begin{split} A &= LW \\ 168 &\stackrel{?}{=} 14 \cdot 12 \\ 168 &= 168\; \checkmark \end{split}$$ Step 7. Answer the question. The width of the room is 12 feet. Exercise 9.61: The area of a rectangle is 598 square feet. The length is 23 feet. What is the width? Exercise 9.62: The width of a rectangle is 21 meters. The area is 609 square meters. What is the length? Example 9.32: The perimeter of a rectangular swimming pool is 150 feet. The length is 15 feet more than the width. Find the length and width. Solution Step 1. Read the problem. Draw the figure and label it with the given information. Step 2. Identify what you are looking for. the length and width of the pool Step 3. Name. Choose a variable to represent it. The length is 15 feet more than the width. Let W = width W + 15 = length Step 4. Translate. Write the appropriate formula and substitute. Step 5. Solve the equation. $$\begin{split} 150 &= 2w + 30 + 2w \\ 150 &= 4w + 30 \\ 120 &= 4w \\ 30 &= w\quad the\; width\; of\; the\; pool \\ w &+ 15\quad the\; length\; of\; the\; pool \\ \textcolor{red}{30} &+ 15 \\ 45& \end{split}$$ Step 6. Check. $$\begin{split} p &= 2L + 2W \\ 150 &\stackrel{?}{=} 2(45) + 2(30) \\ 150 &= 150\; \checkmark \end{split}$$ Step 7. Answer the question. The length of the pool is 45 feet and the width is 30 feet. Exercise 9.63: The perimeter of a rectangular swimming pool is 200 feet. The length is 40 feet more than the width. Find the length and width. Exercise 9.64: The length of a rectangular garden is 30 yards more than the width. The perimeter is 300 yards. Find the length and width. Contributors Lynn Marecek (Santa Ana College) and MaryAnne Anthony-Smith (Formerly of Santa Ana College). This content is licensed under Creative Commons Attribution License v4.0 "Download for free at http://cnx.org/contents/fd53eae1-fa2...49835c3c@5.191."
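If you like to double-check this kind of computation with a short program, here is a small Python sketch of the rectangle formulas used in the examples of this section; the function names are ad hoc.

def rectangle_perimeter(length, width):
    """P = 2L + 2W"""
    return 2 * length + 2 * width

def rectangle_area(length, width):
    """A = L * W"""
    return length * width

def length_from_perimeter(perimeter, width):
    """Solve P = 2L + 2W for L, as in Example 9.28."""
    return (perimeter - 2 * width) / 2

print(rectangle_perimeter(32, 20))      # 104 meters (Example 9.27a)
print(rectangle_area(32, 20))           # 640 square meters (Example 9.27b)
print(length_from_perimeter(50, 10))    # 15.0 inches (Example 9.28)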
Almost-period

A concept from the theory of almost-periodic functions (cf. Almost-periodic function); a generalization of the notion of a period. For a uniformly almost-periodic function $f(x)$, $-\infty<x<\infty$, a number $\tau=\tau_f(\epsilon)$ is called an $\epsilon$-almost-period of $f(x)$ if for all $x$, $$|f(x+\tau)-f(x)|<\epsilon.$$ For generalized almost-periodic functions the concept of an almost-period is more complicated. For example, in the space $S_l^p$ an $\epsilon$-almost-period $\tau$ is defined by the inequality $$D_{S_l^p}[f(x+\tau),f(x)]<\epsilon,$$ where $D_{S_l^p}[f,\phi]$ is the distance between $f(x)$ and $\phi(x)$ in the metric of $S_l^p$.
A set of almost-periods of a function $f(x)$ is said to be relatively dense if there is a number $L=L(\epsilon,f)>0$ such that every interval $(\alpha,\alpha+L)$ of the real line contains at least one number from this set. The concepts of uniformly almost-periodic functions and that of Stepanov almost-periodic functions may be defined by requiring the existence of relatively-dense sets of $\epsilon$-almost-periods for these functions.

References

[1] B.M. Levitan, "Almost-periodic functions", Moscow (1953) (In Russian)

Comments

For the definition of $S_l^p$ and its metric $D_{S_l^p}$ see Almost-periodic function. The Weyl, Besicovitch and Levitan almost-periodic functions can also be characterized in terms of $S_l^p$ $\epsilon$-periods. These characterizations are more complicated. A good additional reference is [a1], especially Chapt. II.

References

[a1] A.S. Besicovitch, "Almost periodic functions", Cambridge Univ. Press (1932)

This article was adapted from an original article by E.A. Bredikhina (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.
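A rough numerical illustration of the definition, not part of the article: the function $f(x)=\sin x+\sin(\sqrt 2\,x)$ is uniformly almost-periodic, and the shift $\tau=24\pi$ (an exact period of the first term and nearly one of the second) should be an $\epsilon$-almost-period for a moderate $\epsilon$. The Python sketch below only estimates the supremum on a finite grid.

import math

def f(x):
    return math.sin(x) + math.sin(math.sqrt(2) * x)

def sup_shift_difference(tau, window=500.0, step=0.01):
    """Estimate sup_x |f(x+tau) - f(x)| on a finite grid (a sketch, not a proof)."""
    n = int(window / step)
    return max(abs(f(i * step + tau) - f(i * step)) for i in range(n))

tau = 24 * math.pi
print(sup_shift_difference(tau))   # roughly 0.18, so tau is an epsilon-almost-period for any epsilon above that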
Showing items 1-10 of 351

Series de Fourier aplicadas a problemas de cálculo de variaciones con retardo (2012-03-22) In this article we present an approximation of the minimizing function of the functional $J[x]=\int_0^T F(t,X(t),X(t-\tau),\dot{X}(t))\,dt$ by approximating $X(t)$ with Cosine Fourier series expansions $X_n(t)$. We give conditions ...

Hércules contra la Hidra y la muerte del Internet (2011-04-29) Hercules killed the Hydra of Lerna in a bloody battle—the second of the labor tasks imposed upon him in atonement for his hideous crimes. The Hydra was a horrible, aggressive mythological monster with many heads and poisonous ...

A mixed-effects model for growth curves analysis in a two-way crossed classification layout (2009-02-20) We propose a mixed-effects linear model for analyzing growth curves data obtained using a two-way classification experiment. The model combines an unconstrained means model and a regression model on the time, in which the ...

Agrupamiento de Filas y Columnas Homogéneas en Modelos de Correspondencia (2011-04-29) Goodman (1981) proposed homogeneity and structure criteria in Association Models which allow to determine if certain rows or columns in a contingency table should be grouped. In later works, he showed the relations between ...

Algoritmos Numéricos para el Problema de Restauración de Imágenes usando el Método de las Proyecciones Alternantes (2011-04-29) The projection algorithms have evolved from the alternating projection method proposed by J. von Neumann in 1933, who treated the problem of finding the projection of a given point in a Hilbert space onto the intersection ...

Invariant Manifolds in Parametric Turbulent Models (2011-04-29) The article is devoted to examining the so-called local-equilibrium approximations used while modeling turbulent flows. The dynamics of a far plane turbulent wake are investigated as an example. In this article, we analyze ...

Interval Mathematics Applied to Critical Point Transitions (2012-03-02) The determination of critical points of mixtures is important for both practical and theoretical reasons in the modeling of phase behavior, especially at high pressure. The equations that describe the behavior of complex ...

Sobre el problema inverso de difusión (2009-02-20) Infiltration is physically described in order to model it as a diffusion stochastic process. Theorem M-B 1 is enunciated, whose main objective is the inverse diffusion problem. The theorem is demonstrated in the specific ...

El problema del conjunto independiente en la selección de horarios de cursos (2009-02-20) Registration process at the Universidad Autónoma Metropolitana is such that every student is free to choose his/her own subjects and schedule. Success of this system, based on the percentage of students that obtain a place ...

Term Structure of Interest Rates (2012-03-02) The risk free rate on bonds is a very important quantity that allows calculation of premium values on bonds.
This quantity of stochastic nature has been modeled with different degrees of sophistication. This paper reviews ...
Solve Equations Using a General Strategy

Each of the first few sections of this chapter has dealt with solving one specific form of a linear equation. It’s time now to lay out an overall strategy that can be used to solve any linear equation. We call this the general strategy. Some equations won’t require all the steps to solve, but many will. Simplifying each side of the equation as much as possible first makes the rest of the steps easier.

HOW TO: USE A GENERAL STRATEGY FOR SOLVING LINEAR EQUATIONS Step 1. Simplify each side of the equation as much as possible. Use the Distributive Property to remove any parentheses. Combine like terms. Step 2. Collect all the variable terms to one side of the equation. Use the Addition or Subtraction Property of Equality. Step 3. Collect all the constant terms to the other side of the equation. Use the Addition or Subtraction Property of Equality. Step 4. Make the coefficient of the variable term equal to 1. Use the Multiplication or Division Property of Equality. State the solution to the equation. Step 5. Check the solution. Substitute the solution into the original equation to make sure the result is a true statement.

Example 8.30: Solve: 3(x + 2) = 18. Solution Simplify each side of the equation as much as possible. Use the Distributive Property. $$3x + 6 = 18 \tag{8.3.46}$$ Collect all variable terms on one side of the equation—all x's are already on the left side. Collect constant terms on the other side of the equation. Subtract 6 from each side. $$3x + 6 \textcolor{red}{-6} = 18 \textcolor{red}{-6} \tag{8.3.47}$$ Simplify. $$3x = 12 \tag{8.3.48}$$ Make the coefficient of the variable term equal to 1. Divide each side by 3. $$\frac{3x}{\textcolor{red}{3}} = \frac{12}{\textcolor{red}{3}} \tag{8.3.49}$$ Simplify. $$x = 4 \tag{8.3.50}$$ Check: Let x = 4. $$\begin{split} 3(x + 2) &= 18 \\ 3(\textcolor{red}{4} + 2) &\stackrel{?}{=} 18 \\ 3(6) &\stackrel{?}{=} 18 \\ 18 &= 18\; \checkmark \end{split}$$

Exercise 8.59: Solve: 5(x + 3) = 35. Exercise 8.60: Solve: 6(y − 4) = −18.

Example 8.31: Solve: −(x + 5) = 7. Solution Simplify each side of the equation as much as possible by distributing. The only x term is on the left side, so all variable terms are on the left side of the equation. $$-x - 5 = 7 \tag{8.3.51}$$ Add 5 to both sides to get all constant terms on the right side of the equation. $$-x - 5 \textcolor{red}{+5} = 7 \textcolor{red}{+5} \tag{8.3.52}$$ Simplify. $$-x = 12 \tag{8.3.53}$$ Make the coefficient of the variable term equal to 1 by multiplying both sides by -1. $$\textcolor{red}{-1} (-x) = \textcolor{red}{-1} (12) \tag{8.3.54}$$ Simplify. $$x = -12 \tag{8.3.55}$$ Check: Let x = −12. $$\begin{split} -(x + 5) &= 7 \\ -(\textcolor{red}{-12} + 5) &\stackrel{?}{=} 7 \\ -(-7) &\stackrel{?}{=} 7 \\ 7 &= 7\; \checkmark \end{split}$$

Exercise 8.61: Solve: −(y + 8) = −2. Exercise 8.62: Solve: −(z + 4) = −12.

Example 8.32: Solve: 4(x − 2) + 5 = −3. Solution Simplify each side of the equation as much as possible. Distribute. $$4x - 8 + 5 = -3 \tag{8.3.56}$$ Combine like terms. $$4x - 3 = -3 \tag{8.3.57}$$ The only x is on the left side, so all variable terms are on one side of the equation. Add 3 to both sides to get all constant terms on the other side of the equation. $$4x - 3 \textcolor{red}{+3} = -3 \textcolor{red}{+3} \tag{8.3.58}$$ Simplify. $$4x = 0 \tag{8.3.59}$$ Make the coefficient of the variable term equal to 1 by dividing both sides by 4.
$$\frac{4x}{\textcolor{red}{4}} = \frac{0}{\textcolor{red}{4}} \tag{8.3.60}$$ Simplify. $$x = 0 \tag{8.3.61}$$ Check: Let x = 0. $$\begin{split} 4(x - 2) + 5 &= -3 \\ 4(\textcolor{red}{0} - 2) + 5 &\stackrel{?}{=} -3 \\ 4(-2) + 5 &\stackrel{?}{=} -3 \\ -8 + 5 &\stackrel{?}{=} -3 \\ -3 &= -3\; \checkmark \end{split}$$

Exercise 8.63: Solve: 2(a − 4) + 3 = −1. Exercise 8.64: Solve: 7(n − 3) − 8 = −15.

Example 8.33: Solve: 8 − 2(3y + 5) = 0. Solution Be careful when distributing the negative. Simplify—use the Distributive Property. $$8 - 6y - 10 = 0 \tag{8.3.62}$$ Combine like terms. $$-6y - 2 = 0 \tag{8.3.63}$$ Add 2 to both sides to collect constants on the right. $$-6y - 2 \textcolor{red}{+2} = 0 \textcolor{red}{+2} \tag{8.3.64}$$ Simplify. $$-6y = 2 \tag{8.3.65}$$ Divide both sides by −6. $$\frac{-6y}{\textcolor{red}{-6}} = \frac{2}{\textcolor{red}{-6}} \tag{8.3.66}$$ Simplify. $$y = - \frac{1}{3} \tag{8.3.67}$$ Check: Let y = \(− \frac{1}{3}\). $$\begin{split} 8 - 2(3y + 5) &= 0 \\ 8 - 2 \Bigg[ 3 \left(\textcolor{red}{- \dfrac{1}{3}}\right) + 5 \Bigg] &\stackrel{?}{=} 0 \\ 8 - 2(-1 + 5) &\stackrel{?}{=} 0 \\ 8 - 2(4) &\stackrel{?}{=} 0 \\ 8 - 8 &\stackrel{?}{=} 0 \\ 0 &= 0\; \checkmark \end{split}$$

Exercise 8.65: Solve: 12 − 3(4j + 3) = −17. Exercise 8.66: Solve: −6 − 8(k − 2) = −10.

Example 8.34: Solve: 3(x − 2) − 5 = 4(2x + 1) + 5. Solution Distribute. $$3x - 6 - 5 = 8x + 4 + 5 \tag{8.3.68}$$ Combine like terms. $$3x - 11 = 8x + 9 \tag{8.3.69}$$ Subtract 3x to get all the variables on the right since 8 > 3. $$3x \textcolor{red}{-3x} - 11 = 8x \textcolor{red}{-3x} + 9 \tag{8.3.70}$$ Simplify. $$-11 = 5x + 9 \tag{8.3.71}$$ Subtract 9 to get the constants on the left. $$-11 \textcolor{red}{-9} = 5x + 9 \textcolor{red}{-9} \tag{8.3.72}$$ Simplify. $$-20 = 5x \tag{8.3.73}$$ Divide by 5. $$\frac{-20}{\textcolor{red}{5}} = \frac{5x}{\textcolor{red}{5}} \tag{8.3.74}$$ Simplify. $$-4 = x \tag{8.3.75}$$ Check: Substitute: −4 = x. $$\begin{split} 3(x - 2) - 5 &= 4(2x + 1) + 5 \\ 3(\textcolor{red}{-4} - 2) - 5 &\stackrel{?}{=} 4[2(\textcolor{red}{-4}) + 1] + 5 \\ 3(-6) - 5 &\stackrel{?}{=} 4(-8 + 1) + 5 \\ -18 - 5 &\stackrel{?}{=} 4(-7) + 5 \\ -23 &\stackrel{?}{=} -28 + 5 \\ -23 &= -23\; \checkmark \end{split}$$

Exercise 8.67: Solve: 6(p − 3) − 7 = 5(4p + 3) − 12. Exercise 8.68: Solve: 8(q + 1) − 5 = 3(2q − 4) − 1.

Example 8.35: Solve: \(\frac{1}{2}\)(6x − 2) = 5 − x. Solution Distribute. $$3x - 1 = 5 - x \tag{8.3.76}$$ Add x to get all the variables on the left. $$3x - 1 \textcolor{red}{+x} = 5 - x \textcolor{red}{+x} \tag{8.3.77}$$ Simplify. $$4x - 1 = 5 \tag{8.3.78}$$ Add 1 to get constants on the right. $$4x - 1 \textcolor{red}{+1} = 5 \textcolor{red}{+1} \tag{8.3.79}$$ Simplify. $$4x = 6 \tag{8.3.80}$$ Divide by 4. $$\frac{4x}{\textcolor{red}{4}} = \frac{6}{\textcolor{red}{4}} \tag{8.3.81}$$ Simplify. $$x = \frac{3}{2} \tag{8.3.82}$$ Check: Let x = \(\frac{3}{2}\). $$\begin{split} \frac{1}{2} (6x - 2) &= 5 - x \\ \frac{1}{2} \left(6 \cdot \textcolor{red}{\dfrac{3}{2}} - 2 \right) &\stackrel{?}{=} 5 - \textcolor{red}{\frac{3}{2}} \\ \frac{1}{2} (9 - 2) &\stackrel{?}{=} \frac{10}{2} - \frac{3}{2} \\ \frac{1}{2} (7) &\stackrel{?}{=} \frac{7}{2} \\ \frac{7}{2} &= \frac{7}{2}\; \checkmark \end{split}$$

Exercise 8.69: Solve: \(\frac{1}{3}\)(6u + 3) = 7 − u. Exercise 8.70: Solve: \(\frac{2}{3}\)(9x − 12) = 8 + 2x.

In many applications, we will have to solve equations with decimals. The same general strategy will work for these equations.

Example 8.36: Solve: 0.24(100x + 5) = 0.4(30x + 15).
Solution Distribute. $$24x + 1.2 = 12x + 6 \tag{8.3.83}$$ Subtract 12x to get all the x's to the left. $$24x + 1.2 \textcolor{red}{-12x} = 12x + 6 \textcolor{red}{-12x} \tag{8.3.84}$$ Simplify. $$12x + 1.2 = 6 \tag{8.3.85}$$ Subtract 1.2 to get the constants to the right. $$12x + 1.2 \textcolor{red}{-1.2} = 6 \textcolor{red}{-1.2} \tag{8.3.86}$$ Simplify. $$12x = 4.8 \tag{8.3.87}$$ Divide. $$\frac{12x}{\textcolor{red}{12}} = \frac{4.8}{\textcolor{red}{12}} \tag{8.3.88}$$ Simplify. $$x = 0.4 \tag{8.3.89}$$ Check: Let x = 0.4. $$\begin{split} 0.24(100x + 5) &= 0.4(30x + 15) \\ 0.24[100(\textcolor{red}{0.4}) + 5] &\stackrel{?}{=} 0.4[30(\textcolor{red}{0.4}) + 15] \\ 0.24(40 + 5) &\stackrel{?}{=} 0.4(12 + 15) \\ 0.24(45) &\stackrel{?}{=} 0.4(27) \\ 10.8 &= 10.8\; \checkmark \end{split}$$

Exercise 8.71: Solve: 0.55(100n + 8) = 0.6(85n + 14). Exercise 8.72: Solve: 0.15(40m − 120) = 0.5(60m + 12).

Practice Makes Perfect

Solve an Equation with Constants on Both Sides In the following exercises, solve the equation for the variable. 6x − 2 = 40 7x − 8 = 34 11w + 6 = 93 14y + 7 = 91 3a + 8 = −46 4m + 9 = −23 −50 = 7n − 1 −47 = 6b + 1 25 = −9y + 7 29 = −8x − 3 −12p − 3 = 15 −14q − 15 = 13

Solve an Equation with Variables on Both Sides In the following exercises, solve the equation for the variable. 8z = 7z − 7 9k = 8k − 11 4x + 36 = 10x 6x + 27 = 9x c = −3c − 20 b = −4b − 15 5q = 44 − 6q 7z = 39 − 6z 3y + \(\frac{1}{2}\) = 2y 8x + \(\frac{3}{4}\) = 7x −12a − 8 = −16a −15r − 8 = −11r

Solve an Equation with Variables and Constants on Both Sides In the following exercises, solve the equations for the variable. 6x − 15 = 5x + 3 4x − 17 = 3x + 2 26 + 8d = 9d + 11 21 + 6f = 7f + 14 3p − 1 = 5p − 33 8q − 5 = 5q − 20 4a + 5 = −a − 40 9c + 7 = −2c − 37 8y − 30 = −2y + 30 12x − 17 = −3x + 13 2z − 4 = 23 − z 3y − 4 = 12 − y \(\frac{5}{4}\)c − 3 = \(\frac{1}{4}\)c − 16 \(\frac{4}{3}\)m − 7 = \(\frac{1}{3}\)m − 13 8 − \(\frac{2}{5}\)q = \(\frac{3}{5}\)q + 6 11 − \(\frac{1}{4}\)a = \(\frac{3}{4}\)a + 4 \(\frac{4}{3}\)n + 9 = \(\frac{1}{3}\)n − 9 \(\frac{5}{4}\)a + 15 = \(\frac{3}{4}\)a − 5 \(\frac{1}{4}\)y + 7 = \(\frac{3}{4}\)y − 3 \(\frac{3}{5}\)p + 2 = \(\frac{4}{5}\)p − 1 14n + 8.25 = 9n + 19.60 13z + 6.45 = 8z + 23.75 2.4w − 100 = 0.8w + 28 2.7w − 80 = 1.2w + 10 5.6r + 13.1 = 3.5r + 57.2 6.6x − 18.9 = 3.4x + 54.7

Solve an Equation Using the General Strategy In the following exercises, solve the linear equation using the general strategy. 5(x + 3) = 75 4(y + 7) = 64 8 = 4(x − 3) 9 = 3(x − 3) 20(y − 8) = −60 14(y − 6) = −42 −4(2n + 1) = 16 −7(3n + 4) = 14 3(10 + 5r) = 0 8(3 + 3p) = 0 \(\frac{2}{3}\)(9c − 3) = 22 \(\frac{3}{5}\)(10x − 5) = 27 5(1.2u − 4.8) = −12 4(2.5v − 0.6) = 7.6 0.2(30n + 50) = 28 0.5(16m + 34) = −15 −(w − 6) = 24 −(t − 8) = 17 9(3a + 5) + 9 = 54 8(6b − 7) + 23 = 63 10 + 3(z + 4) = 19 13 + 2(m − 4) = 17 7 + 5(4 − q) = 12 −9 + 6(5 − k) = 12 15 − (3r + 8) = 28 18 − (9r + 7) = −16 11 − 4(y − 8) = 43 18 − 2(y − 3) = 32 9(p − 1) = 6(2p − 1) 3(4n − 1) − 2 = 8n + 3 9(2m − 3) − 8 = 4m + 7 5(x − 4) − 4x = 14 8(x − 4) − 7x = 14 5 + 6(3s − 5) = −3 + 2(8s − 1) −12 + 8(x − 5) = −4 + 3(5x − 2) 4(x − 1) − 8 = 6(3x − 2) − 7 7(2x − 5) = 8(4x − 1) − 9

Everyday Math

Making a fence: Jovani has a fence around the rectangular garden in his backyard. The perimeter of the fence is 150 feet. The length is 15 feet more than the width. Find the width, w, by solving the equation 150 = 2(w + 15) + 2w.

Concert tickets: At a school concert, the total value of tickets sold was $1,506.
Student tickets sold for $6 and adult tickets sold for $9. The number of adult tickets sold was 5 less than 3 times the number of student tickets. Find the number of student tickets sold, s, by solving the equation 6s + 9(3s − 5) = 1506.

Coins: Rhonda has $1.90 in nickels and dimes. The number of dimes is one less than twice the number of nickels. Find the number of nickels, n, by solving the equation 0.05n + 0.10(2n − 1) = 1.90.

Fencing: Micah has 74 feet of fencing to make a rectangular dog pen in his yard. He wants the length to be 25 feet more than the width. Find the length, L, by solving the equation 2L + 2(L − 25) = 74.

Writing Exercises 203. When solving an equation with variables on both sides, why is it usually better to choose the side with the larger coefficient as the variable side? 204. Solve the equation 10x + 14 = −2x + 38, explaining all the steps of your solution. 205. What is the first step you take when solving the equation 3 − 7(y − 4) = 38? Explain why this is your first step. 206. Solve the equation \(\frac{1}{4}\)(8x + 20) = 3x − 4, explaining all the steps of your solution as in the examples in this section. 207. Using your own words, list the steps in the General Strategy for Solving Linear Equations. 208. Explain why you should simplify both sides of an equation as much as possible before collecting the variable terms to one side and the constant terms to the other side.

Self Check (a) After completing the exercises, use this checklist to evaluate your mastery of the objectives of this section. (b) What does this checklist tell you about your mastery of this section? What steps will you take to improve?

Contributors Lynn Marecek (Santa Ana College) and MaryAnne Anthony-Smith (Formerly of Santa Ana College). This content is licensed under Creative Commons Attribution License v4.0 "Download for free at http://cnx.org/contents/fd53eae1-fa2...49835c3c@5.191."
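For readers who want to verify answers by machine, here is a short Python sketch of the "Check" step used throughout this section; the helper name check_solution is an ad hoc choice.

def check_solution(lhs, rhs, value):
    """Substitute `value` for the variable in both sides and compare, like the Check step."""
    left, right = lhs(value), rhs(value)
    print(f"{left} ?= {right} ->", "checks" if abs(left - right) < 1e-9 else "does not check")

# Example 8.30: 3(x + 2) = 18, solution x = 4
check_solution(lambda x: 3 * (x + 2), lambda x: 18, 4)
# Example 8.36: 0.24(100x + 5) = 0.4(30x + 15), solution x = 0.4
check_solution(lambda x: 0.24 * (100 * x + 5), lambda x: 0.4 * (30 * x + 15), 0.4)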
2019-09-20 08:41 Search for the $^{73}\mathrm{Ga}$ ground-state doublet splitting in the $\beta$ decay of $^{73}\mathrm{Zn}$ / Vedia, V (UCM, Madrid, Dept. Phys.) ; Paziy, V (UCM, Madrid, Dept. Phys.) ; Fraile, L M (UCM, Madrid, Dept. Phys.) ; Mach, H (UCM, Madrid, Dept. Phys. ; NCBJ, Swierk) ; Walters, W B (Maryland U., Dept. Chem.) ; Aprahamian, A (Notre Dame U.) ; Bernards, C (Cologne U. ; Yale U. (main)) ; Briz, J A (Madrid, Inst. Estructura Materia) ; Bucher, B (Notre Dame U. ; LLNL, Livermore) ; Chiara, C J (Maryland U., Dept. Chem. ; Argonne, PHY) et al. The existence of two close-lying nuclear states in $^{73}$Ga has recently been experimentally determined: a 1/2$^−$ spin-parity for the ground state was measured in a laser spectroscopy experiment, while a J$^{\pi} = 3/2^−$ level was observed in transfer reactions. This scenario is supported by Coulomb excitation studies, which set a limit for the energy splitting of 0.8 keV. [...] 2017 - 13 p. - Published in : Phys. Rev. C 96 (2017) 034311

2019-09-20 08:41 Search for shape-coexisting 0$^+$ states in $^{66}$Ni from lifetime measurements / Olaizola, B (UCM, Madrid, Dept. Phys.) ; Fraile, L M (UCM, Madrid, Dept. Phys.) ; Mach, H (UCM, Madrid, Dept. Phys. ; NCBJ, Warsaw) ; Poves, A (Madrid, Autonoma U.) ; Nowacki, F (Strasbourg, IPHC) ; Aprahamian, A (Notre Dame U.) ; Briz, J A (Madrid, Inst. Estructura Materia) ; Cal-González, J (UCM, Madrid, Dept. Phys.) ; Ghiţa, D (Bucharest, IFIN-HH) ; Köster, U (Laue-Langevin Inst.) et al. The lifetime of the 0$_3^+$ state in $^{66}$Ni, two neutrons below the $N=40$ subshell gap, has been measured. The transition $B(E2;0_3^+ \rightarrow 2_1^+)$ is one of the most hindered E2 transitions in the Ni isotopic chain and it implies that, unlike $^{68}$Ni, there is a spherical structure at low excitation energy. [...] 2017 - 6 p. - Published in : Phys. Rev. C 95 (2017) 061303

2019-09-17 07:00 Laser spectroscopy of neutron-rich tin isotopes: A discontinuity in charge radii across the $N=82$ shell closure / Gorges, C (Darmstadt, Tech. Hochsch.) ; Rodríguez, L V (Orsay, IPN) ; Balabanski, D L (Bucharest, IFIN-HH) ; Bissell, M L (Manchester U.) ; Blaum, K (Heidelberg, Max Planck Inst.) ; Cheal, B (Liverpool U.) ; Garcia Ruiz, R F (Leuven U. ; CERN ; Manchester U.) ; Georgiev, G (Orsay, IPN) ; Gins, W (Leuven U.) ; Heylen, H (Heidelberg, Max Planck Inst. ; CERN) et al. The change in mean-square nuclear charge radii $\delta \left \langle r^{2} \right \rangle$ along the even-A tin isotopic chain $^{108-134}$Sn has been investigated by means of collinear laser spectroscopy at ISOLDE/CERN using the atomic transitions $5p^2\ ^1S_0 \rightarrow 5p6\ s^1P_1$ and $5p^2\ ^3P_0 \rightarrow 5p6s^3 P_1$. With the determination of the charge radius of $^{134}$Sn and corrected values for some of the neutron-rich isotopes, the evolution of the charge radii across the $N=82$ shell closure is established. [...] 2019 - 7 p. - Published in : Phys. Rev. Lett. 122 (2019) 192502

2019-09-17 07:00 Radioactive boron beams produced by isotope online mass separation at CERN-ISOLDE / Ballof, J (CERN ; Mainz U., Inst. Kernchem.) ; Seiffert, C (CERN ; Darmstadt, Tech. U.) ; Crepieux, B (CERN) ; Düllmann, Ch E (Mainz U., Inst. Kernchem. ; Darmstadt, GSI ; Helmholtz Inst., Mainz) ; Delonca, M (CERN) ; Gai, M (Connecticut U. LNS Avery Point Groton) ; Gottberg, A (CERN) ; Kröll, T (Darmstadt, Tech. U.)
; Lica, R (CERN ; Bucharest, IFIN-HH) ; Madurga Flores, M (CERN) et al. We report on the development and characterization of the first radioactive boron beams produced by the isotope mass separation online (ISOL) technique at CERN-ISOLDE. Despite the long history of the ISOL technique which exploits thick targets, boron beams have up to now not been available. [...] 2019 - 11 p. - Published in : Eur. Phys. J. A 55 (2019) 65

2019-09-17 07:00 Inverse odd-even staggering in nuclear charge radii and possible octupole collectivity in $^{217,218,219}\mathrm{At}$ revealed by in-source laser spectroscopy / Barzakh, A E (St. Petersburg, INP) ; Cubiss, J G (York U., England) ; Andreyev, A N (York U., England ; JAEA, Ibaraki ; CERN) ; Seliverstov, M D (St. Petersburg, INP ; York U., England) ; Andel, B (Comenius U.) ; Antalic, S (Comenius U.) ; Ascher, P (Heidelberg, Max Planck Inst.) ; Atanasov, D (Heidelberg, Max Planck Inst.) ; Beck, D (Darmstadt, GSI) ; Bieroń, J (Jagiellonian U.) et al. Hyperfine-structure parameters and isotope shifts for the 795-nm atomic transitions in $^{217,218,219}$At have been measured at CERN-ISOLDE, using the in-source resonance-ionization spectroscopy technique. Magnetic dipole and electric quadrupole moments, and changes in the nuclear mean-square charge radii, have been deduced. [...] 2019 - 9 p. - Published in : Phys. Rev. C 99 (2019) 054317

2019-09-17 07:00 Investigation of the $\Delta n = 0$ selection rule in Gamow-Teller transitions: The $\beta$-decay of $^{207}$Hg / Berry, T A (Surrey U.) ; Podolyák, Zs (Surrey U.) ; Carroll, R J (Surrey U.) ; Lică, R (CERN ; Bucharest, IFIN-HH) ; Grawe, H ; Timofeyuk, N K (Surrey U.) ; Alexander, T (Surrey U.) ; Andreyev, A N (York U., England) ; Ansari, S (Cologne U.) ; Borge, M J G (CERN ; Madrid, Inst. Estructura Materia) et al. Gamow-Teller $\beta$ decay is forbidden if the number of nodes in the radial wave functions of the initial and final states is different. This $\Delta n=0$ requirement plays a major role in the $\beta$ decay of heavy neutron-rich nuclei, affecting the nucleosynthesis through the increased half-lives of nuclei on the astrophysical $r$-process pathway below both $Z=50$ (for $N>82$) and $Z=82$ (for $N>126$). [...] 2019 - 5 p. - Published in : Phys. Lett. B 793 (2019) 271-275

2019-09-14 06:30 Precision measurements of the charge radii of potassium isotopes / Koszorús, Á (KU Leuven, Dept. Phys. Astron.) ; Yang, X F (KU Leuven, Dept. Phys. Astron. ; Peking U., SKLNPT) ; Billowes, J (Manchester U.) ; Binnersley, C L (Manchester U.) ; Bissell, M L (Manchester U.) ; Cocolios, T E (KU Leuven, Dept. Phys. Astron.) ; Farooq-Smith, G J (KU Leuven, Dept. Phys. Astron.) ; de Groote, R P (KU Leuven, Dept. Phys. Astron. ; Jyvaskyla U.) ; Flanagan, K T (Manchester U.) ; Franchoo, S (Orsay, IPN) et al. Precision nuclear charge radii measurements in the light-mass region are essential for understanding the evolution of nuclear structure, but their measurement represents a great challenge for experimental techniques. At the Collinear Resonance Ionization Spectroscopy (CRIS) setup at ISOLDE-CERN, a laser frequency calibration and monitoring system was installed and commissioned through the hyperfine spectra measurement of $^{38–47}$K. [...] 2019 - 11 p. - Published in : Phys. Rev.
C 100 (2019) 034304

2019-09-12 09:23 Evaluation of high-precision atomic masses of A ∼ 50-80 and rare-earth nuclides measured with ISOLTRAP / Huang, W J (CSNSM, Orsay ; Heidelberg, Max Planck Inst.) ; Atanasov, D (CERN) ; Audi, G (CSNSM, Orsay) ; Blaum, K (Heidelberg, Max Planck Inst.) ; Cakirli, R B (Istanbul U.) ; Herlert, A (FAIR, Darmstadt) ; Kowalska, M (CERN) ; Kreim, S (Heidelberg, Max Planck Inst. ; CERN) ; Litvinov, Yu A (Darmstadt, GSI) ; Lunney, D (CSNSM, Orsay) et al. High-precision mass measurements of stable and beta-decaying nuclides $^{52-57}$Cr, $^{55}$Mn, $^{56,59}$Fe, $^{59}$Co, $^{75, 77-79}$Ga, and the lanthanide nuclides $^{140}$Ce, $^{140}$Nd, $^{160}$Yb, $^{168}$Lu, $^{178}$Yb have been performed with the Penning-trap mass spectrometer ISOLTRAP at ISOLDE/CERN. The new data are entered into the Atomic Mass Evaluation and improve the accuracy of masses along the valley of stability, strengthening the so-called backbone. [...] 2019 - 9 p. - Published in : Eur. Phys. J. A 55 (2019) 96

2019-09-05 06:35 Nuclear charge radii of $^{62−80}$Zn and their dependence on cross-shell proton excitations / Xie, L (Manchester U.) ; Yang, X F (Peking U., SKLNPT ; Leuven U.) ; Wraith, C (Liverpool U.) ; Babcock, C (Liverpool U.) ; Bieroń, J (Jagiellonian U.) ; Billowes, J (Manchester U.) ; Bissell, M L (Manchester U. ; Leuven U.) ; Blaum, K (Heidelberg, Max Planck Inst.) ; Cheal, B (Liverpool U.) ; Filippin, L (U. Brussels (main)) et al. Nuclear charge radii of $^{62−80}$Zn have been determined using collinear laser spectroscopy of bunched ion beams at CERN-ISOLDE. The subtle variations of observed charge radii, both within one isotope and along the full range of neutron numbers, are found to be well described in terms of the proton excitations across the $Z=28$ shell gap, as predicted by large-scale shell model calculations. [...] 2019 - 5 p. - Published in : Phys. Lett. B 797 (2019) 134805

2019-09-04 06:18 Electromagnetic properties of low-lying states in neutron-deficient Hg isotopes: Coulomb excitation of $^{182}$Hg, $^{184}$Hg, $^{186}$Hg and $^{188}$Hg / Wrzosek-Lipska, K (Warsaw U., Heavy Ion Lab ; Leuven U.) ; Rezynkina, K (Leuven U. ; U. Strasbourg) ; Bree, N (Leuven U.) ; Zielińska, M (Warsaw U., Heavy Ion Lab ; IRFU, Saclay) ; Gaffney, L P (Liverpool U. ; Leuven U. ; CERN ; West Scotland U.) ; Petts, A (Liverpool U.) ; Andreyev, A (Leuven U. ; York U., England) ; Bastin, B (Leuven U. ; GANIL) ; Bender, M (Lyon, IPN) ; Blazhev, A (Cologne U.) et al. The neutron-deficient mercury isotopes serve as a classical example of shape coexistence, whereby at low energy near-degenerate nuclear states characterized by different shapes appear. The electromagnetic structure of even-mass $^{182-188}$Hg isotopes was studied using safe-energy Coulomb excitation of neutron-deficient mercury beams delivered by the REX-ISOLDE facility at CERN. [...] 2019 - 23 p. - Published in : Eur. Phys. J. A 55 (2019) 130
Database Dependencies

Reference work entry. DOI: https://doi.org/10.1007/978-1-4614-8265-9_1236

Synonyms: Database constraints; Data dependency

Definition

For a relational database to be valid, it is not sufficient that the various tables of which it is composed conform to the database schema. In addition, the instance must also conform to the intended meaning of the database [19]. While many aspects of this intended meaning are inherently informal, it will generally induce certain formalizable relationships between the data in the database, in the sense that whenever a certain pattern is present among the data, this pattern can either be extended or certain data values must be equal. Such a relationship is called a database dependency. The vast majority of database dependencies in the literature are of the following form [6]:
$$(\forall x_1)\dots(\forall x_n)\,\varphi(x_1,\dots,x_n) \Rightarrow (\exists z_1)\dots(\exists z_k)\,\psi(y_1,\dots,y_m,z_1,\dots,z_k) \dots$$

Recommended Reading

1. Abiteboul S, Hull R, Vianu V. Foundations of databases. Reading: Addison-Wesley; 1995 (Part C).
3. Armstrong WW. Dependency structures of data base relationships. In: Proceedings of the IFIP Congress, Information Processing 74; 1974. p. 580–3.
4. Beeri C, Fagin R, Howard JH. A complete axiomatization for functional and multivalued dependencies. In: Proceedings of the ACM SIGMOD International Conference on Management of Data; 1978. p. 47–61.
6. Beeri C, Vardi MY. The implication problem for data dependencies. In: Proceedings of the International Conference on Algorithms, Languages, and Programming; 1981. Springer. p. 73–85.
17. Herrmann C. Corrigendum to "On the undecidability of implications between embedded multivalued database dependencies" [Inform. and Comput. 122 (1995) 221–235]. Inf Comput. 204(12):1847–51, 2006.
18. Kanellakis PC. Elements of relational database theory. In: Van Leeuwen J, editor. Handbook of theoretical computer science. Amsterdam: Elsevier; 1991. p. 1074–156.
19. Paredaens J, De Bra P, Gyssens M, Van Gucht D. The structure of the relational database model. In: Brauer W, Rozenberg G, Salomaa A, editors. EATCS monographs on theoretical computer science, vol. 17. Berlin: Springer; 1989.
20. Petrov SV. Finite axiomatization of languages for representation of system properties. Inf Sci. 47(3):339–72, 1989.
22. Zaniolo C. Analysis and design of relational schemata for database systems. Ph.D. thesis, University of California at Los Angeles; 1976. Technical Report UCLA-Eng-7669.
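As a concrete illustration of an equality-generating dependency, the following Python sketch checks a functional dependency on a toy table; the table, attribute names and data are invented for the example.

# The functional dependency X -> Y holds when any two rows that agree on X also agree on Y.
rows = [
    {"course": "DB", "teacher": "Ann", "room": "R1"},
    {"course": "DB", "teacher": "Ann", "room": "R2"},
    {"course": "AI", "teacher": "Bob", "room": "R1"},
]

def satisfies_fd(rows, lhs, rhs):
    """Check the functional dependency lhs -> rhs (both tuples of attribute names)."""
    seen = {}
    for row in rows:
        key = tuple(row[a] for a in lhs)
        val = tuple(row[a] for a in rhs)
        if key in seen and seen[key] != val:
            return False          # the pattern is present, but the required equality fails
        seen[key] = val
    return True

print(satisfies_fd(rows, ("course",), ("teacher",)))  # True
print(satisfies_fd(rows, ("course",), ("room",)))     # False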
Question from a Monopoly-like board game

If you have played Monopoly before, you must have met situations where you really want to land on a certain square for whatever reason (to complete a set of properties, to build houses, etc.). In that case the only thing you can do is ride your luck and hope that you roll the right number. In online games, however, this can be negotiated: items come into play that give certainty about the number you get from the die, at some cost. You want to use those items wisely, so here is our model:

- A circular board of 22 squares.
- A fair six-sided die is rolled each round by default. An item may be used instead to fix the outcome to any chosen number from 1 to 6.
- 4 bonus squares are spread uniformly over the other 21 squares (every square except the starting one). One must land on a bonus square exactly to receive its bonus.
- 6 rounds in total.

Note that it is theoretically possible to get all 4 bonuses regardless of the bonus distribution.

Goal: we want to land on all 4 bonus squares every time, while minimizing the number of items used.

* You would expect 3~5 items to be used each game: if the distance to a treasure is less than 6 you have no choice but to use an item to make sure that you reach it, and sometimes you have to stretch far enough to reach the more distant squares.

But what about the average usage of each of the numbers 1/2/3/4/5/6? Is it the same for all of them? A simple simulation shows that the distribution of the numbers needed is approximately geometric, but it turns out that my average usage of all 6 items is almost the same, which is an interesting fact to investigate. Below are my thoughts.

First of all, it is natural to use a lot of 1s and 2s according to our distribution. At the same time, if we use lots of 1s and 2s then we might need to use more 5s and 6s, as the remaining bonus squares are more sparsely spread. What about 3 and 4? This is the most mysterious part in my view, but a possible reason is that the most probable distance that requires multiple rounds is of course the 7-9 range, and there is a high chance that you will need a 3 or a 4 to correct your position. (For instance, if the distance is 8 then there is a 4/9 chance that you will use a 3 or a 4 once. This can be checked by simply listing all possible outcomes, where (x) is the correction step:

6 - (2)
5 - (3)
4 - (4)
3 - (5)
2 - 6
2 - 5 - (1)
2 - 4 - (2)
2 - 3 - (3)
2 - 2 - (4)
2 - 1 - (5)
1 - 6 - (1)
1 - 5 - (2)
1 - 4 - (3)
1 - 3 - (4)
1 - 2 - (5)
1 - 1 - (6)

Therefore the chance is 2/6 + 4/36 = 4/9.) But given the geometric/exponential nature of the distances, 7-9 won't happen that often after all. It is still very hard to explain my uniform usage of those items.

Of course, this is not even a problem for the developers -- this is something that only the players should worry about. Imagine that the board game comes from an extremely popular online game where 24/7 grinding plus unlimited potions is required to climb the ranks, and the actual usage is uneven ("the alternative hypothesis"); knowing the correct distribution would save you several minutes of going back to the shop, hence giving you an edge over other players --- luckily, the game I mentioned is not competitive at all.
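Here is a minimal version of the kind of simulation mentioned above. The greedy policy it uses (spend an item exactly when the next uncollected bonus is 6 or fewer squares away, otherwise roll), the uniformly random bonus placement, and ignoring the 6-round cap are all my own simplifying assumptions; it only illustrates how one would measure the per-number usage.

import random
from collections import Counter

BOARD = 22          # squares on the circular board; square 0 is the start
N_BONUS = 4
TRIALS = 100_000

def one_game(rng):
    # Play one lap with the assumed greedy policy and return the item values used.
    bonuses = sorted(rng.sample(range(1, BOARD), N_BONUS))   # random bonus squares
    pos, used = 0, []
    for target in bonuses:
        while pos != target:
            d = target - pos
            if d <= 6:
                used.append(d)               # an item guarantees the exact landing
                pos = target
            else:
                pos += rng.randint(1, 6)     # ordinary roll; cannot overshoot since d > 6
    return used

rng = random.Random(0)
counts = Counter()
for _ in range(TRIALS):
    counts.update(one_game(rng))

total = sum(counts.values())
for face in range(1, 7):
    print(face, round(counts[face] / total, 3))

Running this gives an empirical usage profile for this particular policy; whether it comes out close to uniform is exactly the kind of question the sketch is meant to answer, and a different item policy would of course shift the numbers.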
Exponential effort resulting in linear growth

Systems requiring exponential effort for linear growth are a canonical design choice. They appear in most RPGs (as well as idle games), based on the fact that exponential growth overwhelms any polynomial growth from whatever percentage bonus. The implementation is usually simple too: EXP bars that grow exponentially, item prices that grow exponentially, time requirements that grow exponentially... But are there more implicit implementations of this trick? Some may suggest an item forging system: when you forge A into B, B gains a fraction of the power of A -- you can still see the exponential nature behind it: you need $2^N$ items to boost an item $N$ times (proportionally, and ignoring the higher-order terms). But recently there is a game that allows unlimited forging on the same item: on the $N$th forge, a fraction $\frac{1}{(N+r)k}$ of the power is merged into the item, where $r$ and $k$ are constants. The developer is using the harmonic series smartly here, relying on the fact that the harmonic series is asymptotic to the log function. Let us do the mathematics. Suppose we have $2^N-r$ identical items of power 1. By forging everything into the same item, the new power is given by

$1 + \sum_{i=r}^{2^N}\frac{1}{ik} \approx 1 + k^{-1}(N\ln 2 - \ln r).$

And if we have $2^N$ items and we forge them in the usual `exponential way' we get

$\left(1+\frac{1}{rk}\right)^N \leq 1 + \frac{5}{4}\cdot\frac{N}{rk},$

where the constant $\frac{5}{4}$ serves as a generous upper bound (valid as long as $\frac{N}{rk}$ stays small). Exponential effort is clearly necessary for linear growth. The scheme also removes any incentive to forge items in the usual `exponential way': it suffices to check that the inequality

$N \ln 2 - \ln r - \frac{5}{4}\cdot\frac{N}{r} \geq 0$

holds for most reasonable $r, N$, for instance $(N,r) = (5,3)$.
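A quick numerical comparison of the two schemes is easy to script. The sketch below is my own illustration (the function names and the parameter values, including $k$, are assumptions): it computes the power obtained by dumping $2^N-r$ items into one item under the harmonic rule, the logarithmic approximation above, and the power obtained from $2^N$ items merged pairwise in the `exponential way'.

import math

def harmonic_forge(n_forges, r, k):
    # Forge n_forges identical power-1 items into one item; the i-th forge
    # adds a 1/((i+r)k) fraction of the consumed item's power.
    return 1.0 + sum(1.0 / ((i + r) * k) for i in range(1, n_forges + 1))

def exponential_forge(levels, r, k):
    # Rough model of the 'exponential way' following the (1 + 1/(rk))^N expression
    # in the text: `levels` rounds of merging equal items, using 2**levels items.
    power = 1.0
    for _ in range(levels):
        power *= 1.0 + 1.0 / (r * k)
    return power

N, r, k = 5, 3, 10                      # assumed example parameters
print("harmonic, one item :", round(harmonic_forge(2**N - r, r, k), 4))
print("log approximation  :", round(1 + (N * math.log(2) - math.log(r)) / k, 4))
print("pairwise, 2^N items:", round(exponential_forge(N, r, k), 4))

With these (assumed) numbers the single-item harmonic route already edges out the pairwise route, in line with the inequality above; note that the bound with the constant 5/4 only applies while $N/(rk)$ is small.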
Based on such demands, I designed a co-pilot which can:

Automatically identify the clusters;
Remove periodic boundary conditions and put the center-of-mass at $(0,0,0)^T$;
Adjust the view vector along the minor axis of the aggregate;
Classify the aggregate.

After obtaining clusters in step 1, we must remove the periodic boundary conditions from each cluster. If in step 1 one uses BFS or DFS plus the linked cell list method, then one can remove periodic boundary conditions during clustering; but this method has limitations: it does not work properly if the cluster percolates through the box. Therefore, in this step, I use circular statistics to deal with the clusters. In a simulation box with periodic boundary conditions, the distance between an NP in an aggregate and the midpoint of the aggregate never exceeds $L/2$ in the corresponding box dimension. The midpoint here is not the center-of-mass; e.g., the distance between the nozzle point and the center-of-mass of an ear-syringe bulb is clearly larger than its half length; the midpoint is actually a "de-duplicated" center-of-mass. Besides, the circular mean also puts most points near the center in the case of percolation. Therefore, in part 2, we have the following steps:

Choose an $r_\text{cut}$ to test whether the aggregate is percolated;
If the aggregate is percolated, evaluate the circular mean of all points, $r_c$;
Set all coordinates $r$ as $r\to \mathrm{pbc}(r-r_c)$;
If the aggregate is not percolated, the midpoint is evaluated by calculating the circular mean of the coordinates $r$ where $\rho(r)>0$, with $\rho(r)$ computed using a bin size smaller than the $r_\text{cut}$ used in step 1;
Same as step 3, update the coordinates;
After step 5, the aggregates are unwrapped from the box; set $r\to r-\overline{r}$ to put the center-of-mass at $(0,0,0)^T$.

Here the circular mean used in steps 2 and 4 is, after mapping each coordinate onto an angle $\alpha_i$ on a circle of circumference $L$, the minimizer

$$\bar{\alpha}=\underset{\beta}{\operatorname{argmin}}\sum_i \bigl(1-\cos(\alpha_i-\beta)\bigr).$$

Adjusting the view vector is simple: evaluate the eigenvectors of the gyration tensor $\frac{1}{n}\sum_i r_i r_i^T$ (with the $r_i$ measured from the center-of-mass) and sort them by eigenvalue, i.e. $\lambda_1\ge\lambda_2\ge\lambda_3$; the minor axis is the corresponding eigenvector $v_3$, and the aggregate can then be rotated by $[v_1, v_2, v_3]$ so that the minor axis becomes the $z$-axis.

The last step is a bit more tricky. The best attempt so far was to use an SVC, a binary classification method. I used about 20 samples labeled as "desired"; these 20 samples were extended to 100 samples by adding some noise, e.g. moving coordinates a little, or adding or removing several NPs at random, without "breaking" the category of the morphology. Together with 100 "undesired" samples, I trained the SVC with a Gaussian kernel. The result turned out to be pretty good. I also tried to use an ANN to classify all 5 categories of morphologies obtained from the simulations, but the ANN model did not work very well; perhaps the reason was a lack of samples, or the model I built was too rough. I didn't try other multi-class methods; anyway, that part of the work was done, and I stopped developing this co-pilot a long time ago.
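For reference, here is a minimal NumPy sketch of the circular-mean step for one box dimension. It is my own illustration, not the co-pilot's actual code, and the box length, array names, and mapping are assumptions.

import numpy as np

def circular_mean(x, box_length):
    # Map positions onto angles, average on the unit circle, and map back;
    # the arctan2 of the mean sine and cosine is exactly the minimizer of
    # sum_i (1 - cos(alpha_i - beta)) quoted above.
    theta = 2.0 * np.pi * x / box_length
    mean_angle = np.arctan2(np.sin(theta).mean(), np.cos(theta).mean())
    return (mean_angle % (2.0 * np.pi)) * box_length / (2.0 * np.pi)

def unwrap_cluster(x, box_length):
    # Shift a cluster so it is no longer split across the periodic boundary,
    # then place its center of mass at the origin (one dimension shown).
    xc = circular_mean(x, box_length)
    shifted = (x - xc + 0.5 * box_length) % box_length - 0.5 * box_length
    return shifted - shifted.mean()

# tiny example: particles wrapped across the boundary of a box of length 10
coords = np.array([9.2, 9.6, 0.1, 0.4, 0.8])
print(unwrap_cluster(coords, 10.0))

The same one-dimensional operation is applied to each periodic box dimension independently.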
Key Concepts from Lecture 2

Reynolds Number

The Reynolds number predicts the extent of turbulence in a fluid based on how fast the fluid is flowing, the geometry of the flow (how deep and wide it is, etc.), and the density and viscosity of the fluid.

\[Re = \dfrac{\text{fluid inertial forces}}{\text{fluid viscous forces}} = \dfrac{l \times u \times \rho}{µ} \tag{3.1}\]

where the variables are flow velocity (\(u\)), characteristic length (\(l\)) which represents the flow geometry, say river depth, fluid density (\(\rho\)), and fluid viscosity (\(µ\)). For the version of the equation we are using, turbulent flow has Re greater than 2000 and laminar flow has Re less than 500. Flow with Re between 500 and 2000 is transitional and has some characteristics of laminar flow, but some turbulence as well. (Note that different versions of the Reynolds equation are used for different flow geometries and different equations have different numerical boundaries between laminar, transitional, and turbulent flows.)

Boundary Layer and Laminar Sublayer

There is a boundary layer at the edge of every flow where flow speed decreases due to friction. Within the boundary layer, right next to the surface, the flow speed is very low, creating a laminar sublayer.

Sediment Transport

Bed Shear Stress

The boundary layer determines the amount of "bed shear stress", which corresponds to the forces that tend to roll particles along the bed and the pressure differences above and below the grain which tend to lift them off the bed. Bed shear stress is related to the thickness of the laminar sublayer: the thinner it is, the greater the bed shear stress. It also depends on the slope. If the slope is steep, gravity helps pull grains down the slope, increasing bed shear stress. Also, the roughness of the bed is a factor. A rough bed deflects flows and increases turbulence, which increases the bed shear stress, particularly in places where flow is directed into the sediment and the boundary layer is compressed. (See http://en.wikipedia.org/wiki/Sediment_transport#Bed_shear_stress for more detailed information.)

The Bernoulli Effect

A lower pressure above grains than below them can "pull" grains off the bed into the flow. The pressure difference comes from a difference in water (or air) speed above and below the grain.

Which Grains Move?

Which grains get entrained in the flow depends on their size and density (how much they weigh), because that determines the force of gravity holding them down. It also depends on the shape of the grain. A grain with a large area to experience the low pressure (like a plate) will be more susceptible to being picked up than a round grain of the same mass (although flat grains may see a smaller flow difference from top to bottom if the boundary layer is thick, and thus flat grains may experience a lower Bernoulli effect per unit area). The other thing that really matters is the position of a grain relative to surrounding grains. If a grain is sandwiched between larger grains, i.e. in their flow shadows, it will not experience as big a pressure difference as if it were on a flat surface. Also, if a grain is upstream of a big grain, it has to be lifted over it, so it has to experience enough force to lift it high into the flow. Thus, things can be complicated if you are trying to predict the behavior of a specific grain. However, experiments and theory provide statistically meaningful predictions for how grains behave on average.
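As a quick worked example of equation (3.1) above (the numbers are chosen for illustration and are not from the lecture): take water at room temperature, with \(\rho \approx 1000\ \mathrm{kg/m^3}\) and \(µ \approx 1.0\times10^{-3}\ \mathrm{Pa\cdot s}\), flowing at 0.5 m/s in a river about 1 m deep. Then

\[Re = \dfrac{l \times u \times \rho}{µ} = \dfrac{1\ \mathrm{m}\times 0.5\ \mathrm{m/s}\times 1000\ \mathrm{kg/m^3}}{1.0\times 10^{-3}\ \mathrm{Pa\cdot s}} = 5\times 10^{5},\]

which is far above 2000, so the flow is turbulent. To get below Re = 500 (laminar) at this depth, the water would have to creep along at about half a millimeter per second.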
Bedload and Suspended Load Transport

Two things can happen once a grain is lifted into the flow: 1) it can fall back down, or 2) it can stay there. It depends on how quickly the grain settles out versus how turbulent the water is (back to Re...). Bedload refers to the grains that are transported along the sedimentary bed, i.e. grains that are rolling and being lifted off the bed but fall back quickly. The name bedload comes from the fact that the grains moving by traction and saltation never get too far from the bed, and "load" is an engineering term for the amount of sediment transported by a river. Rolling grains are in traction. Grains that are pulled off the bed by the Bernoulli effect but are large enough that gravity causes them to fall "quickly" back to the bed are said to be saltating. (The word saltating refers to the way salt from a salt shaker bounces when it is shaken onto a hard surface. The word is derived from a Latin word meaning dance.) Bedload grains are the ones that form sedimentary structures in flowing water. Here is a playlist with movies related to sediment transport: Sumnerd's Sed Transport Playlist

Suspended sediment consists of grains that are light or small enough that they do not settle out of the water; the turbulent bursts of water keep them in the flow (see brown water in the photo). The more turbulence in the water, i.e. the higher the Reynolds number, the larger the grains in suspension will be. The upward motions of turbulent flow are faster than the rate at which these grains settle, so gravity is counteracted and they stay "floating" in the water even though they are denser than the water. Very small grains do not settle out of flows unless the Reynolds number is low, which means that the flows need to either have a very low flow speed or be very shallow. YouTube video of white clay in a turbulent flow in a flume: http://tinyurl.com/78kg3z The pulsing in the flow is (probably) due to the pump that is making the water flow.

Hjulstrom Diagram

The flows that are required to pick up grains of certain sizes have been extensively studied in experiments, and the results are plotted in Hjulstrom (or Shields) diagrams. Hjulstrom diagrams show grain entrainment on a plot of log grain size versus log flow speed. This diagram shows the areas where grains of different sizes are left on the bed, where they get moved sometimes (this is the gray zone), and where they get lifted up often and eroded away. Note that larger grains require higher flows - in general. The water speed that is required to transport a grain is called the critical velocity. This is important. If there is gravel in a sedimentary deposit, you can say that the water flow had to be above the critical threshold for it to get there! That might require a fast-flowing river or strong wave action, and thus a large part of narrowing down the depositional environment has already been done! Here is the Hjulstrom Diagram we will use (or the one in the Nichols text, which has additional information on it): The vertical axis is flow speed in cm/s (in Finnish!) and the horizontal axis is grain size in mm (in Finnish!); note that both axes are on a log scale. This diagram is for a flow depth of 1 m. If the flow speed and grain size are in the field labeled "Deposition", grains of this size will not be lifted into the flow, and if they are already moving, they will be deposited onto the sediment surface.
If the flow speed and grain size are in the field labeled "Transport", grains in motion are likely to continue in motion. A few grains will be deposited and a few grains will be eroded, but there will not be a significant change in the number of grains that move. If the flow speed and grain size are in the field labeled "Erosion", more grains will be transported than deposited until the flow is transporting as many grains as it can, e.g. it is at its "carrying capacity" for sediment. The boundaries for deposition, transport and erosion shift with changing flow depth. For example, deeper flows can move larger grains at the same flow velocity because they are more turbulent: \[Re = \dfrac{l \times u \times \rho}{µ} \tag{3.1}\] and l is larger. This is because deeper flows can have larger variations in flow speed and the laminar flow layers are very thin. They can have bursts of very rapid flow relative to the average flow speed and these bursts can pick up larger grains. Actual flow characteristics are much more complex in detail than just Hjulstrom diagrams, which summarize a lot of characteristics into two axes. However, like a lot of people, we will use the diagram anyway, because it is very useful as a rule of thumb. Just remember that it does not accurately represent what will happen in detail - it represents a reasonable guess. Silt and Clay Transport Notice that for the small end of grain size, the speed of flow required for erosion actually increases. One reason small grains are hard to erode is that they tend not to stick up through the laminar sublayer; they are just too small. Thus, thinner boundary layers are necessary to roll them or for the pressure differences to pick them up off the bed. Also, the surfaces of clay minerals tend to be charged and the grains stick together. This is most obvious when big clumps of mud stick to your shoes. That just does not happen with sand (unless there is something gross in it). The stickiness of the clay grains makes them difficult to erode, so faster water flows (a greater pressure difference or larger turbulent burst down to the sediment surface) are required to move them. The smaller the grains, the more surface charges stick the grains together, thus the stronger the flow needed to erode them. The stickiness of the clay grains also depends on the amount of water between them and the mineralogy, so there is a big gray zone where a clay may or may not erode. In the Hjulstrom diagram, there is an interesting area where the flow is not strong enough to move any of the particles on the bed, but those that are in the suspended load do not settle out either. This zone includes many of the waters on the surface of the Earth. In flows with any velocity or that are very deep, Re is high enough to keep some clay in suspension. Clay deposition usually occurs very slowly, e.g. when the rate of settling is just slightly faster than the average rate at which turbulence moves clay particles upward or when the clays clump together to form larger grains (which is common when fresh and salty waters mix). Miscellaneous Notes A few more words about saltation: Saltation is a very interesting and important process in sediment transport, because the force of the impact when the grains land tends to knock new grains up into the flow even if the flow is not fast enough to lift them with the Bernoulli Effect. These new grains can kick up more grains when they land, etc. 
This increases the rate of sediment transport above the amount the flow can lift grains off of the bed. This is one of the causes of the gray zone in the Hjulstrom diagram at larger grain sizes. Once saltation starts, it can trigger sediment transport that would not otherwise occur. Deposition: Deposition is the accumulation of grains. If a flow starts slowly and gains speed, it will start to move larger and larger grains. As it slows down, it can only move the smaller ones. Deposition happens when a flow slows down and starts to leave grains on the bed. The combination of changing average flow speeds and local variations in flow speed caused by topography on the bed give rise to very informative sedimentary structures – including cross stratification - which are extremely useful for interpreting depositional environments.
These are homework exercises to accompany Lebl's "Differential Equations for Engineering" Textmap. This is a textbook targeted at a one-semester first course on differential equations, aimed at engineering students. The prerequisite for the course is the basic calculus sequence.

Exercise 5.1.4: Find eigenvalues and eigenfunctions of \[y''+ \lambda y=0,~~~y(0)-y'(0)=0,~~~y(1)=0.\]

Exercise 5.1.5: Expand the function \(f(x)=x\) on \(0 \leq x \leq 1\) using the eigenfunctions of the system \[y''+ \lambda y=0,~~~y'(0)=0,~~~y(1)=0.\]

Exercise 5.1.6: Suppose that you had a Sturm-Liouville problem on the interval \([0,1]\) and came up with \(y_n(x)=\sin(\gamma nx)\), where \(\gamma >0\) is some constant. Decompose \(f(x)=x\), \(0<x<1\), in terms of these eigenfunctions.

Exercise 5.1.7: Find eigenvalues and eigenfunctions of \[y^{(4)}+ \lambda y=0,~~~y(0)=0,~~~y'(0)=0,~~~y(1)=0,~~~y'(1)=0.\] This problem is not a Sturm-Liouville problem, but the idea is the same.

Exercise 5.1.8 (more challenging): Find eigenvalues and eigenfunctions for \[\frac{d}{dx}(e^xy')+ \lambda e^xy=0,~~~y(0)=0,~~~y(1)=0.\] Hint: First write the system as a constant coefficient system to find general solutions. Do note that Theorem 5.1.1 guarantees \(\lambda \geq 0\).

Exercise 5.1.101: Find eigenvalues and eigenfunctions of \[y''+ \lambda y=0,~~~y(-1)=0,~~~y(1)=0.\]

Exercise 5.1.102: Put the following problems into the standard form for Sturm-Liouville problems, that is, find \(p(x)\), \(q(x)\), \(r(x)\), \(\alpha_1\), \(\alpha_2\), \(\beta_1\), \(\beta_2\), and decide if the problems are regular or not.
a) \(xy''+\lambda y=0\) for \(0<x<1\), \(y(0)=0\), \(y(1)=0\);
b) \((1+x^2)y''+2xy'+(\lambda -x^2)y=0\) for \(-1<x<1\), \(y(-1)=0\), \(y(1)+y'(1)=0\).

Exercise 5.2.2: Suppose you have a beam of length \(5\) with free ends. Let \(y\) be the transverse deviation of the beam at position \(x\) on the beam \((0<x<5)\). You know that the constants are such that this satisfies the equation \(y_{tt}+4y_{xxxx}=0\). Suppose you know that the initial shape of the beam is the graph of \(x(5-x)\), and the initial velocity is uniformly equal to \(2\) (same for each \(x\)) in the positive \(y\) direction. Set up the equation together with the boundary and initial conditions. Just set up, do not solve.

Exercise 5.2.3: Suppose you have a beam of length \(5\) with one end free and one end fixed (the fixed end is at \(x=5\)). Let \(u\) be the longitudinal deviation of the beam at position \(x\) on the beam \((0<x<5)\). You know that the constants are such that this satisfies the equation \(u_{tt}=4u_{xx}\). Suppose you know that the initial displacement of the beam is \(\frac{x-5}{50}\), and the initial velocity is \(\frac{-(x-5)}{100}\) in the positive \(u\) direction. Set up the equation together with the boundary and initial conditions. Just set up, do not solve.

Exercise 5.2.4: Suppose the beam is \(L\) units long, everything else kept the same as in (5.2.2). What is the equation and the series solution?

Exercise 5.2.5: Suppose you have \[ a^4y_{xxxx}+y_{tt}=0~~~~(0<x<1,~t>0), \\ y(0,t)=y_{xx}(0,t)=0, \\ y(1,t)=y_{xx}(1,t)=0, \\ y(x,0)=f(x),~~~~y_t(x,0)=g(x). \] That is, you also have an initial velocity. Find a series solution. Hint: Use the same idea as we did for the wave equation.

Exercise 5.2.101: Suppose you have a beam of length \(1\) with hinged ends. Let \(y\) be the transverse deviation of the beam at position \(x\) on the beam (\(0<x<1\)). You know that the constants are such that this satisfies the equation \(y_{tt}+4y_{xxxx}=0\).
Suppose you know that the initial shape of the beam is the graph of \(\sin(\pi x)\), and the initial velocity is \(0\). Solve for \(y\).

Exercise 5.2.102: Suppose you have a beam of length \(10\) with two fixed ends. Let \(y\) be the transverse deviation of the beam at position \(x\) on the beam (\(0<x<10\)). You know that the constants are such that this satisfies the equation \(y_{tt}+9y_{xxxx}=0\). Suppose you know that the initial shape of the beam is the graph of \(\sin(\pi x)\), and the initial velocity is \(x(10-x)\). Set up the equation together with the boundary and initial conditions. Just set up, do not solve.

Exercise 5.3.5: Suppose that the forcing function for the vibrating string is \(F_0 \sin(\omega t)\). Derive the particular solution \(y_p\).

Exercise 5.3.6: Take the forced vibrating string. Suppose that \(L=1\), \(a=1\). Suppose that the forcing function is the square wave that is \(1\) on the interval \(0<x<1\) and \(-1\) on the interval \(-1<x<0\). Find the particular solution. Hint: You may want to use the result of Exercise 5.3.5.

Exercise 5.3.7: The units are cgs (centimeters-grams-seconds). For \(k=0.005\), \(\omega =1.991 \times 10^{-7}\), \(A_0=20\). Find the depth at which the temperature variation is half (\(\pm 10\) degrees) of what it is on the surface.

Exercise 5.3.8: Derive the solution for underground temperature oscillation without assuming that \(T_0=0\).

Exercise 5.3.101: Take the forced vibrating string. Suppose that \(L=1\), \(a=1\). Suppose that the forcing function is a sawtooth, that is, \(|x|-\frac{1}{2}\) on \(-1<x<1\) extended periodically. Find the particular solution.

Exercise 5.3.102: The units are cgs (centimeters-grams-seconds). For \(k=0.01\), \(\omega =1.991 \times 10^{-7}\), \(A_0=25\). Find the depth at which the summer is again the hottest point.
So first let me state my homework problem:

Let $X$ be a set, let $\{A_k\}$ be a sequence of subsets of $X$, let $B = \bigcup_{n=1}^{+\infty} \bigcap_{k=n}^{+\infty} A_k$, and let $C = \bigcap_{n=1}^{+\infty} \bigcup_{k=n}^{+\infty} A_k$. Show that (a) $\liminf_k\; \chi_{A_k} = \chi_B$, and (b) $\limsup_k \;\chi_{A_k} = \chi_C$.

I know that, in the context I am familiar with, $$\liminf_{k\rightarrow +\infty}\; X_k = \bigcup_{k=1}^{+\infty} \bigcap_{n=k}^{+\infty} X_n$$ and $$\limsup_{k\rightarrow +\infty}\;X_k = \bigcap_{k=1}^{+\infty} \bigcup_{n=k}^{+\infty} X_n.$$ I also know that the characteristic (indicator) function is defined as $\chi_A(x) = \begin{cases} 1, & x \in A \\ 0, & x \notin A .\end{cases}$

So I wrote out $B$ in some of its `glory': $B= (A_1 \cap A_2 \cap A_3 \cap \cdots) \cup (A_2 \cap A_3 \cap \cdots) \cup (A_3 \cap A_4 \cap \cdots) \cup \cdots$, and as the first term is the smallest, with the terms increasing in size to the right, the last term in the expression would be the largest. So if I replace the $X_k$'s above with $\chi$'s I still don't see how I can get the correct answer - though it looks pretty clear from the definition of $B$ and that of $\liminf$ being basically the same, except in this case for the $\chi$. Any direction would be greatly appreciated.

By the way, I have checked out "limsup and liminf of a sequence of subsets of a set" but I was somewhat confused by the topology, the meets/joins, etc.

Thanks much, Nate
2019-10-09 06:01 HiRadMat: A facility beyond the realms of materials testing / Harden, Fiona (CERN) ; Bouvard, Aymeric (CERN) ; Charitonidis, Nikolaos (CERN) ; Kadi, Yacine (CERN)/HiRadMat experiments and facility support teams The ever-expanding requirements of high-power targets and accelerator equipment has highlighted the need for facilities capable of accommodating experiments with a diverse range of objectives. HiRadMat, a High Radiation to Materials testing facility at CERN has, throughout operation, established itself as a global user facility capable of going beyond its initial design goals. [...] 2019 - 4 p. - Published in : 10.18429/JACoW-IPAC2019-THPRB085 Fulltext from publisher: PDF; In : 10th International Particle Accelerator Conference, Melbourne, Australia, 19 - 24 May 2019, pp.THPRB085 Detaljert visning - Lignende elementer 2019-10-09 06:01 Commissioning results of the tertiary beam lines for the CERN neutrino platform project / Rosenthal, Marcel (CERN) ; Booth, Alexander (U. Sussex (main) ; Fermilab) ; Charitonidis, Nikolaos (CERN) ; Chatzidaki, Panagiota (Natl. Tech. U., Athens ; Kirchhoff Inst. Phys. ; CERN) ; Karyotakis, Yannis (Annecy, LAPP) ; Nowak, Elzbieta (CERN ; AGH-UST, Cracow) ; Ortega Ruiz, Inaki (CERN) ; Sala, Paola (INFN, Milan ; CERN) For many decades the CERN North Area facility at the Super Proton Synchrotron (SPS) has delivered secondary beams to various fixed target experiments and test beams. In 2018, two new tertiary extensions of the existing beam lines, designated “H2-VLE” and “H4-VLE”, have been constructed and successfully commissioned. [...] 2019 - 4 p. - Published in : 10.18429/JACoW-IPAC2019-THPGW064 Fulltext from publisher: PDF; In : 10th International Particle Accelerator Conference, Melbourne, Australia, 19 - 24 May 2019, pp.THPGW064 Detaljert visning - Lignende elementer 2019-10-09 06:00 The "Physics Beyond Colliders" projects for the CERN M2 beam / Banerjee, Dipanwita (CERN ; Illinois U., Urbana (main)) ; Bernhard, Johannes (CERN) ; Brugger, Markus (CERN) ; Charitonidis, Nikolaos (CERN) ; Cholak, Serhii (Taras Shevchenko U.) ; D'Alessandro, Gian Luigi (Royal Holloway, U. of London) ; Gatignon, Laurent (CERN) ; Gerbershagen, Alexander (CERN) ; Montbarbon, Eva (CERN) ; Rae, Bastien (CERN) et al. Physics Beyond Colliders is an exploratory study aimed at exploiting the full scientific potential of CERN’s accelerator complex up to 2040 and its scientific infrastructure through projects complementary to the existing and possible future colliders. Within the Conventional Beam Working Group (CBWG), several projects for the M2 beam line in the CERN North Area were proposed, such as a successor for the COMPASS experiment, a muon programme for NA64 dark sector physics, and the MuonE proposal aiming at investigating the hadronic contribution to the vacuum polarisation. [...] 2019 - 4 p. - Published in : 10.18429/JACoW-IPAC2019-THPGW063 Fulltext from publisher: PDF; In : 10th International Particle Accelerator Conference, Melbourne, Australia, 19 - 24 May 2019, pp.THPGW063 Detaljert visning - Lignende elementer 2019-10-09 06:00 The K12 beamline for the KLEVER experiment / Van Dijk, Maarten (CERN) ; Banerjee, Dipanwita (CERN) ; Bernhard, Johannes (CERN) ; Brugger, Markus (CERN) ; Charitonidis, Nikolaos (CERN) ; D'Alessandro, Gian Luigi (CERN) ; Doble, Niels (CERN) ; Gatignon, Laurent (CERN) ; Gerbershagen, Alexander (CERN) ; Montbarbon, Eva (CERN) et al. The KLEVER experiment is proposed to run in the CERN ECN3 underground cavern from 2026 onward. 
The goal of the experiment is to measure ${\rm{BR}}(K_L \rightarrow \pi^0v\bar{v})$, which could yield information about potential new physics, by itself and in combination with the measurement of ${\rm{BR}}(K^+ \rightarrow \pi^+v\bar{v})$ of NA62. [...] 2019 - 4 p. - Published in : 10.18429/JACoW-IPAC2019-THPGW061 Fulltext from publisher: PDF; In : 10th International Particle Accelerator Conference, Melbourne, Australia, 19 - 24 May 2019, pp.THPGW061 Detaljert visning - Lignende elementer 2019-09-21 06:01 Beam impact experiment of 440 GeV/p protons on superconducting wires and tapes in a cryogenic environment / Will, Andreas (KIT, Karlsruhe ; CERN) ; Bastian, Yan (CERN) ; Bernhard, Axel (KIT, Karlsruhe) ; Bonura, Marco (U. Geneva (main)) ; Bordini, Bernardo (CERN) ; Bortot, Lorenzo (CERN) ; Favre, Mathieu (CERN) ; Lindstrom, Bjorn (CERN) ; Mentink, Matthijs (CERN) ; Monteuuis, Arnaud (CERN) et al. The superconducting magnets used in high energy particle accelerators such as CERN’s LHC can be impacted by the circulating beam in case of specific failure cases. This leads to interaction of the beam particles with the magnet components, like the superconducting coils, directly or via secondary particle showers. [...] 2019 - 4 p. - Published in : 10.18429/JACoW-IPAC2019-THPTS066 Fulltext from publisher: PDF; In : 10th International Particle Accelerator Conference, Melbourne, Australia, 19 - 24 May 2019, pp.THPTS066 Detaljert visning - Lignende elementer 2019-09-20 08:41 Performance study for the photon measurements of the upgraded LHCf calorimeters with Gd$_2$SiO$_5$ (GSO) scintillators / Makino, Y (Nagoya U., ISEE) ; Tiberio, A (INFN, Florence ; U. Florence (main)) ; Adriani, O (INFN, Florence ; U. Florence (main)) ; Berti, E (INFN, Florence ; U. Florence (main)) ; Bonechi, L (INFN, Florence) ; Bongi, M (INFN, Florence ; U. Florence (main)) ; Caccia, Z (INFN, Catania) ; D'Alessandro, R (INFN, Florence ; U. Florence (main)) ; Del Prete, M (INFN, Florence ; U. Florence (main)) ; Detti, S (INFN, Florence) et al. The Large Hadron Collider forward (LHCf) experiment was motivated to understand the hadronic interaction processes relevant to cosmic-ray air shower development. We have developed radiation-hard detectors with the use of Gd$_2$SiO$_5$ (GSO) scintillators for proton-proton $\sqrt{s} = 13$ TeV collisions. [...] 2017 - 22 p. - Published in : JINST 12 (2017) P03023 Detaljert visning - Lignende elementer 2019-04-09 06:05 The new CGEM Inner Tracker and the new TIGER ASIC for the BES III Experiment / Marcello, Simonetta (INFN, Turin ; Turin U.) ; Alexeev, Maxim (INFN, Turin ; Turin U.) ; Amoroso, Antonio (INFN, Turin ; Turin U.) ; Baldini Ferroli, Rinaldo (Frascati ; Beijing, Inst. High Energy Phys.) ; Bertani, Monica (Frascati) ; Bettoni, Diego (INFN, Ferrara) ; Bianchi, Fabrizio Umberto (INFN, Turin ; Turin U.) ; Calcaterra, Alessandro (Frascati) ; Canale, N (INFN, Ferrara) ; Capodiferro, Manlio (Frascati ; INFN, Rome) et al. A new detector exploiting the technology of Gas Electron Multipliers is under construction to replace the innermost drift chamber of BESIII experiment, since its efficiency is compromised owing the high luminosity of Beijing Electron Positron Collider. The new inner tracker with a cylindrical shape will deploy several new features. [...] SISSA, 2018 - 4 p. 
- Published in : PoS EPS-HEP2017 (2017) 505 Fulltext: PDF; External link: PoS server In : 2017 European Physical Society Conference on High Energy Physics, Venice, Italy, 05 - 12 Jul 2017, pp.505

CaloCube: a new homogenous calorimeter with high-granularity for precise measurements of high-energy cosmic rays in space / Bigongiari, Gabriele (INFN, Pisa) /Calocube
The direct observation of high-energy cosmic rays, up to the PeV region, will depend on highly performing calorimeters, and the physics performance will be primarily determined by their acceptance and energy resolution. Thus, it is fundamental to optimize their geometrical design, granularity, and absorption depth, with respect to the total mass of the apparatus, probably the most important constraints for a space mission. Furthermore, a calorimeter based space experiment can provide not only flux measurements but also energy spectra and particle identification to overcome some of the limitations of ground-based experiments. [...]
SISSA, 2018 - 5 p. - Published in : PoS EPS-HEP2017 (2017) 481 Fulltext: PDF; External link: PoS server In : 2017 European Physical Society Conference on High Energy Physics, Venice, Italy, 05 - 12 Jul 2017, pp.481
In these notes (page 18, section 1.3.7):

1.3.7 Variables Over One Domain

When all the variables in a formula are understood to take values from the same nonempty set, $D$, it’s conventional to omit mention of $D$. For example, instead of $\forall x \in D \ \exists y \in D.\ Q(x, y)$ we’d write $\forall x\ \exists y.\ Q(x, y)$. (1) The unnamed nonempty set that $x$ and $y$ range over is called the domain of discourse, or just plain domain, of the formula. It’s easy to arrange for all the variables to range over one domain. For example, Goldbach’s Conjecture could be expressed with all variables ranging over the domain $\mathbb{N}$ as $$\forall n.(n \in Evens) \implies (\exists p\ \exists q.\ p \in Primes \land q \in Primes \land n = p + q).$$

Now I have two questions:

1. It doesn't make sense to me that (1) is the same thing as $\forall x \in D\ \exists y \in D.\ Q(x, y)$. Omitting the name of the set makes me think that (1) is actually more general, so what are reasons that I shouldn't think that?

2. Could you please explain the second paragraph in more detail?
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
That was an excellent post and qualifies as a treasure to be found on this site! wtf wrote: When infinities arise in physics equations, it doesn't mean there's a physical infinity. It means that our physics has broken down. Our equations don't apply. I totally get that . In fact even our friend Max gets that.http://blogs.discovermagazine.com/crux/ ... g-physics/ Thanks for the link and I would have showcased it all on its own had I seen it first The point I am making is something different. I am pointing out that: All of our modern theories of physics rely ultimately on highly abstract infinitary mathematics That doesn't mean that they necessarily do; only that so far, that's how the history has worked out. I see what you mean, but as Max pointed out when describing air as seeming continuous while actually being discrete, it's easier to model a continuum than a bazillion molecules, each with functional probabilistic movements of their own. Essentially, it's taking an average and it turns out that it's pretty accurate. But what I was saying previously is that we work with the presumed ramifications of infinity, "as if" this or that were infinite, without actually ever using infinity itself. For instance, y = 1/x as x approaches infinity, then y approaches 0, but we don't actually USE infinity in any calculations, but we extrapolate. There is at the moment no credible alternative. There are attempts to build physics on constructive foundations (there are infinite objects but they can be constructed by algorithms). But not finitary principles, because to do physics you need the real numbers; and to construct the real numbers we need infinite sets. Hilbert pointed out there is a difference between boundless and infinite. For instance space is boundless as far as we can tell, but it isn't infinite in size and never will be until eternity arrives. Why can't we use the boundless assumption instead of full-blown infinity? 1) The rigorization of Newton's calculus culminated with infinitary set theory. Newton discovered his theory of gravity using calculus, which he invented for that purpose. I didn't know he developed calculus specifically to investigate gravity. Cool! It does make sense now that you mention it. However, it's well-known that Newton's formulation of calculus made no logical sense at all. If \(\Delta y\) and \(\Delta x\) are nonzero, then \(\frac{\Delta y}{\Delta x}\) isn't the derivative. And if they're both zero, then the expression makes no mathematical sense! But if we pretend that it does, then we can write down a simple law that explains apples falling to earth and the planets endlessly falling around the sun. I'm going to need some help with this one. If dx = 0, then it contains no information about the change in x, so how can anything result from it? I've always taken dx to mean a differential that is smaller than can be discerned, but still able to convey information. It seems to me that calculus couldn't work if it were based on division by zero, and that if it works, it must not be. What is it I am failing to see? I mean, it's not an issue of 0/0 making no mathematical sense, it's a philosophical issue of the nonexistence of significance because there is nothing in zero to be significant. 2) Einstein's gneral relativity uses Riemann's differential geometry. In the 1840's Bernhard Riemann developed a general theory of surfaces that could be Euclidean or very far from Euclidean. As long as they were "locally" Euclidean. 
Like spheres, and torii, and far weirder non-visualizable shapes. Riemann showed how to do calculus on those surfaces. 60 years later, Einstein had these crazy ideas about the nature of the universe, and the mathematician Minkowski saw that Einstein's ideas made the most mathematical sense in Riemann's framework. This is all abstract infinitary mathematics. Isn't this the same problem as previous? dx=0? 3) Fourier series link the physics of heat to the physics of the Internet; via infinite trigonometric series. In 1807 Joseph Fourier analyzed the mathematics of the distribution of heat through an iron bar. He discovered that any continuous function can be expressed as an infinite trigonometric series, which looks like this: $$f(x) = \sum_{n=0}^\infty a_n \cos(nx) + \sum_{n=1}^\infty b_n \sin(nx)$$ I only posted that because if you managed to survive high school trigonometry, it's not that hard to unpack. You're composing any motion into a sum of periodic sine and cosine waves, one wave for each whole number frequency. And this is an infinite series of real numbers, which we cannot make sense of without using infinitary math. I can't make sense of it WITH infinitary math lol! What's the cosine of infinity? What's the infnite-th 'a'? 4) Quantum theory is functional analysis . If you took linear algebra, then functional analysis can be thought of as infinite-dimensional linear algebra combined with calculus. Functional analysis studies spaces whose points are actually functions; so you can apply geometric ideas like length and angle to wild collections of functions. In that sense functional analysis actually generalizes Fourier series. Quantum mechanics is expressed in the mathematical framework of functional analysis. QM takes place in an infinite-dimensional Hilbert space. To explain Hilbert space requires a deep dive into modern infinitary math. In particular, Hilbert space is complete , meaning that it has no holes in it. It's like the real numbers and not like the rational numbers. QM rests on the mathematics of uncountable sets, in an essential way. Well, thanks to Hilbert, I've already conceded that the boundless is not the same as the infinite and if it were true that QM required infinity, then no machine nor human mind could model it. It simply must be true that open-ended finites are actually employed and underpin QM rather than true infinite spaces. Like Max said, "Not only do we lack evidence for the infinite but we don’t need the infinite to do physics. Our best computer simulations, accurately describing everything from the formation of galaxies to tomorrow’s weather to the masses of elementary particles, use only finite computer resources by treating everything as finite. So if we can do without infinity to figure out what happens next, surely nature can, too—in a way that’s more deep and elegant than the hacks we use for our computer simulations." We can *claim* physics is based on infinity, but I think it's more accurate to say *pretend* or *fool ourselves* into thinking such. Max continued with, "Our challenge as physicists is to discover this elegant way and the infinity-free equations describing it—the true laws of physics. To start this search in earnest, we need to question infinity. I’m betting that we also need to let go of it." He said, "let go of it" like we're clinging to it for some reason external to what is true. I think the reason is to be rid of god, but that's my personal opinion. 
Because if we can't have infinite time, then there must be a creator and yada yada. So if we cling to infinity, then we don't need the creator. Hence why Craig quotes Hilbert because his first order of business is to dispel infinity and substitute god. I applaud your effort, I really do, and I've learned a lot of history because of it, but I still cannot concede that infinity underpins anything and I'd be lying if I said I could see it. I'm not being stubborn and feel like I'm walking on eggshells being as amicable and conciliatory as possible in trying not to offend and I'm certainly ready to say "Ooooohhh... I see now", but I just don't see it. ps -- There's our buddy Hilbert again. He did many great things. William Lane Craig misuses and abuses Hilbert's popularized example of the infinite hotel to make disingenuous points about theology and in particular to argue for the existence of God. That's what I've got against Craig. Craig is no friend of mine and I was simply listening to a debate on youtube (I often let youtube autoplay like a radio) when I heard him quote Hilbert, so I dug into it and posted what I found. I'm not endorsing Craig lol 5) Cantor was led to set theory from Fourier series. In every online overview of Georg Cantor's magnificent creation of set theory, nobody ever mentions how he came upon his ideas. It's as if he woke up one day and decided to revolutionize the foundations of math and piss off his teacher and mentor Kronecker. Nothing could be further from the truth. Cantor was in fact studing Fourier's trigonometric series! One of the questions of that era was whether a given function could have more than one distinct Fourier series. To investigate this problem, Cantor had to consider the various types of sets of points on which two series could agree; or equivalently, the various sets of points on which a trigonometric series could be zero. He was thereby led to the problem of classifying various infinite sets of real numbers; and that led him to the discovery of transfinite ordinal and cardinal numbers. (Ordinals are about order in the same way that cardinals are about quantity). I still can't understand how one infinity can be bigger than another since, to be so, the smaller infinity would need to have limits which would then make it not infinity. In other words, and this is a fact that you probably will not find stated as clearly as I'm stating it here: If you begin by studying the flow of heat through an iron rod; you will inexorably discover transfinite set theory. Right, because of what Max said about the continuum model vs the actual discrete. Heat flow is actually IR light flow which is radiation from one molecule to another: a charged particle vibrates and vibrations include accelerations which cause EM radiation that emanates out in all directions; then the EM wave encounters another charged particle which causes vibration and the cycle continues until all the energy is radiated out. It's a discrete process from molecule to molecule, but is modeled as continuous for simplicity's sake. I've long taken issue with the 3 modes of heat transmission (conduction, convention, radiation) because there is only radiation. Atoms do not touch, so they can't conduct, but the van der waals force simply transfers the vibrations more quickly when atoms are sufficiently close. Convection is simply vibrating atoms in linear motion that are radiating IR light. 
I have many issues with physics and have often described it as more of an art than a science (hence why it's so difficult). I mean, there are pages and pages on the internet devoted to simply trying to define heat.https://www.quora.com/What-is-heat-1https://www.quora.com/What-is-meant-by-heathttps://www.quora.com/What-is-heat-in-physicshttps://www.quora.com/What-is-the-definition-of-heathttps://www.quora.com/What-distinguishes-work-and-heat Physics is a mess. What gamma rays are, depends who you ask. They could be high-frequency light or any radiation of any frequency that originated from a nucleus. But I'm digressing.... I do not know what that means in the ultimate scheme of things. But I submit that even the most ardent finitist must at least give consideration to this historical reality. It just means we're using averages rather than discrete actualities and it's close enough. I hope I've been able to explain why I completely agree with your point that infinities in physical equations don't imply the actual existence of infinities. Yet at the same time, I am pointing out that our best THEORIES of physics are invariably founded on highly infinitary math. As to what that means ... for my own part, I can't help but feel that mathematical infinity is telling us something about the world. We just don't know yet what that is. I think it means there are really no separate things and when an aspect of the universe attempts to inspect itself in order to find its fundamentals or universal truths, it will find infinity like a camera looking at its own monitor. Infinity is evidence of the continuity of the singular universe rather than an existing truly boundless thing. Infinity simply means you're looking at yourself. Anyway, great post! Please don't be mad. Everyone here values your presence and are intimidated by your obvious mathematical prowess Don't take my pushback too seriously I'd prefer if we could collaborate as colleagues rather than competing.
I'd like to quote an answer from Stack Overflow by hstoerr which covers the problem nicely:That heavily depends on the structure of the search tree and the number and location of solutions.If you know a solution is not far from the root of the tree, a breadth first search (BFS) might be better. If the tree is very deep and solutions are rare, depth ... One point that's important in our multicore world: BFS is much more easy to parallelize. This is intuitively reasonable (send off threads for each child) and can be proven to be so as well. So if you have a scenario where you can make use of parallelism, then BFS is the way to go. (I made this a community wiki. Please feel free to edit.)If$b$ is the branching factor$d$ is the depth where the solution is$h$ is the height of the tree (so, $d\le h$)ThenDFS takes $O(b^h)$ time and $O(h)$ spaceBFS takes $O(b^d)$ time and $O(b^d)$ spaceIDDFS takes $O(b^d)$ time and $O(d)$ spaceReasons to chooseDFSmust see whole tree anyway... Tree Examples (image):A: B:‾‾ ‾‾1 1/ / \2 2 3/3This is an example that fits your scenario, Tree A root׳s value is 1, having a left child with value 2, and his left child has also a left child with value 3.Tree B root׳s value is 1, ... When doing a DFS, any node is in one of three states - before being visited, during recursively visiting its descendants, and after all its descendants have been visited (returning to its parent, i.e., wrap-up phase). The three colors correspond to each of the three states. One of the reasons for mentioning colors and time of visit and return is to ... Consider the data structure used to represent the search. In a BFS, you use a queue. If you come across an unseen node, you add it to the queue.The “frontier” is the set of all nodes in the search data structure. The queue will will iterate through all nodes on the frontier sequentially, thus iterating across the breadth of the frontier. DFS will always ... Wikipedia has the answer:All types of edges appear in this picture. Trace out DFS on this graph (the nodes are explored in numerical order), and see where your intuition fails.This will explain the diagram:-Forward edge: (u, v), where v is a descendant of u, butnot a tree edge.It is a non-tree edge that connects a vertex to a descendent in a DFS-tree.... One scenario (other than finding the shortest path, which has already been mentioned) where you may have to choose one over the other to get a correct program would be infinite graphs:If we consider for example a tree where each node has a finite number of children, but the height of the tree is infinite, DFS might never find the node you're looking for - ... Breadth-first and depth-first certainly have the same worst-case behaviour (the desired node is the last one found). I suspect this is also true for averave-case if you don't have information about your graphs.One nice bonus of breadth-first search is that it finds shortest paths (in the sense of fewest edges) which may or may not be of interest.If your ... If looking for the key 60 we reach a number $K$ less than 60, we go right (where the larger numbers are) and we never meet numbers less than $K$. That argument can be repeated, so the numbers 10, 20, 40, 50 must occur along the search in that order.Similarly, if looking for the key 60 we reach a number $K$ larger than 60, we go leftt (where the smaller ... It is NP-complete to even decide whether any path exists.It is clearly possible to verify any given path is a valid path in the given graph. 
Thus the bounded-length problem is in NP, and so is its subset, the any-path problem.Now, to prove NP-hardness of the any-path problem (and thus of the bounded-length problem), let's reduce SAT-CNF to this problem:... Transposing the adjecency matrix $A$ does$\qquad A[i,j] = 1 \iff A^T[j,i] = 1$.In terms of graphs, that means$\qquad u \to_G v \iff v \to_{G^T} u$.In other words, transposing reverses the direction of all edges. Note that $G^T$ has the same strong components as $G$.The algorithm you are looking at is Kosaraju's algorithm. Be carful with your notion ... A path of length $n$ consists of $n$ line segments in the plane. You want to find all intersections between these line segments. This is a standard problem that has been studied in depth in the computer graphics literature.A simple algorithm is the following: for each pair of line segments, check whether they intersect (using a standard geometric ... There are exponentially many such routes.Think of a sequence of $n$ diamonds. At each diamond, you can go either left or right, independently of what you do at all other diamonds. This leads to $2^n$ paths, each of which is non-intersecting. Now the complete graph on those vertices contains all of these paths, plus some more, so this is a lower-bound on ... All parts of proving the claim hinge on 2 crucial properties of trees with undirected edges:1-connectedness (ie. between any 2 nodes in a tree there is exactly one path)any node can serve as the root of the tree.Choose an arbitrary tree node $s$. Assume $u, v \in V(G)$ are nodes with $d(u,v) = diam(G)$. Assume further that the algorithm finds a node $x$ ... A DFS traversal in an undirected graph will not leave a cross edge since all edges that are incident on a vertex are explored.However, in a directed graph, you may come across an edge that leads to a vertex that has been discovered before such that that vertex is not an ancestor or descendent of the current vertex. Such an edge is called a cross edge. The intuition behind is very easy to understand. Suppose I have to find longest path that exists between any two nodes in the given tree.After drawing some diagrams we can observe that the longest path will always occur between two leaf nodes( nodes having only one edge linked).This can also be proved by contradiction that if longest path is between two ... No, it's not limited to binary trees. Yes, pre-order and post-order can be used for $n$-ary trees. You simply replace the steps "Traverse the left subtree.... Traverse the right subtree...." in the Wikipedia article by "For each child: traverse the subtree rooted at that child by recursively calling the traversal function". We assume that the for-loop ... The book is counting the number of times each line is executed throughout the entire execution of a call of DFS, rather than the number of times it is executed in each call of the subroutine DFS-VISIT. Perhaps the following simpler example will make this clear:PROCEDURE A(n)1 global = 02 for i from 1 to n:3 B(i)4 return globalPROCEDURE B(i)1 ... The bounds $O(|V|+|E|)$ and $O(b^d)$ are talking about different things. The former is appropriate when you know what $V$ and $E$ are in advance, and they're both finite. The latter is appropriate when the graph is only defined implicitly and may be infinite, or where you've decided in advance that you're only going to search to a fixed depth.An ... 
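The diameter argument quoted above refers to the classic two-pass algorithm: search from an arbitrary node to find a farthest node x, then search again from x; the largest distance found in the second pass is the diameter. Here is a minimal sketch of it (the adjacency-list representation and the names are mine):

from collections import deque

def bfs_farthest(adj, start):
    # Return (node, distance) of a vertex farthest from `start` in an unweighted tree.
    dist = {start: 0}
    queue = deque([start])
    farthest = start
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                if dist[v] > dist[farthest]:
                    farthest = v
                queue.append(v)
    return farthest, dist[farthest]

def tree_diameter(adj):
    # Two BFS passes: from any node to a farthest node x, then from x.
    any_node = next(iter(adj))
    x, _ = bfs_farthest(adj, any_node)
    _, diameter = bfs_farthest(adj, x)
    return diameter

# path 1-2-3-4 with an extra leaf 5 hanging off node 2: diameter is 3 (edges)
tree = {1: [2], 2: [1, 3, 5], 3: [2, 4], 4: [3], 5: [2]}
print(tree_diameter(tree))   # 3

The proof being discussed is exactly the claim that the node found in the first pass is always an endpoint of some longest path.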
This represents a difference between the kinds of problems the CS algorithms community usually uses BFS to solve, vs the kinds of problems the CS artificial intelligence community usually uses BFS to solve.The algorithms community typically is focused on the case where we have a finite graph, and where we're going to run some algorithm that probably visits ... Lets assume you consider trees of $n$ nodes. Now take any binary tree with $n$ nodes and name the nodes according to their pre-order numbering. Then clearly the pre-order sequence of the tree will be $1,2,\dots,n$.This means that we can name the nodes of any binary tree structure so that it will generate the same pre-order sequence as that of another ... Counting argumentThe number of unlabeled binary trees of $n$ nodes is the $n^\text{th}$ Catalan number $C_n=(2n)!/(n!(n+1)!).$ For example there are 5 binary trees of 3 nodes,o o o o o/ / / \ \ \o o o o o o ./ \ ... First of all, it depends a bit how you can access your data to say which algorithms works best.Anyway, I would suggest to determine the heights in a top-down fashion rather than bottom-up. I personally think that a top-down approach is conceptually nicer and easier to analyze. For any vertex $v$ in the forest it is true that$$\text{height}(v)=\begin{... A depth first search on a directed graph can yield 4 types of edges; tree, forward, back and cross edges. As we are looking at undirected graphs, it should be obvious that forward and back edges are the same thing, so the only things left to deal with are cross edges.A cross edge in a graph is an edge that goes from a vertex $v$ to another vertex $u$ such ... You basically have two choices: "cheating" by embedding a queue in the nodes, and simulating BFS with higher complexity.Embedded-Queue CheatingIf you look at virtually any description of BFS, e.g., this one on Wikipedia, then you can see that the algorithm adds attributes to nodes. E.g., the Wikipedia version adds to each node the attributes distance and ... In the long run, it's really better to understand the graph theory terminology, but for now, here is an explanation of Christofides's algorithm. I'm not an expert in this area so I can't offer much by way of intuition. Also, I should note that by now, better algorithms are known for some variants, see for example the recent survey by Vygen.We denote the ... Here is one approach. Given leaves $\alpha,\beta$, first compute the depths $d(\alpha),d(\beta)$ of both leaves (to compute the depth of a leaf, measure how many times you need to apply the parent operation until you reach the root). Suppose without loss of generality that $d(\alpha) \geq d(\beta)$. Replace $\alpha$ with $\alpha$'s $(d(\alpha) - d(\beta))$'... This is proved by Cook and McKenzie. We make use of the following notation:$\deg(v)$ is the degree of a vertex $v$.$N(v,1),\ldots,N(v,\deg(v))$ is some fixed ordering of the neighbors of $v$.We construct a sequence $v_1,v_2,\ldots$ of nodes starting with $v_1 = s$ and $v_2 = N(s,1)$ (if $\deg(s) = 0$ then $s$ is connected to $t$ iff $s = t$). Given $v_{...
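The depth-equalisation approach in the truncated answer above ("Given leaves α, β, first compute the depths...") can be written out as follows. This is only a sketch under the assumption that every node stores a parent pointer; the class and function names are mine.

class Node:
    def __init__(self, label, parent=None):
        self.label = label
        self.parent = parent

def depth(node):
    # Number of parent steps from the node to the root.
    d = 0
    while node.parent is not None:
        node, d = node.parent, d + 1
    return d

def lowest_common_ancestor(a, b):
    # Equalise depths, then walk both nodes up in lockstep until they meet.
    da, db = depth(a), depth(b)
    for _ in range(da - db):          # only runs if a is deeper
        a = a.parent
    for _ in range(db - da):          # only runs if b is deeper
        b = b.parent
    while a is not b:
        a, b = a.parent, b.parent
    return a

# tiny example: root -> (x -> (p, q), y)
root = Node("root")
x = Node("x", root); y = Node("y", root)
p = Node("p", x); q = Node("q", x)
print(lowest_common_ancestor(p, q).label)   # x
print(lowest_common_ancestor(p, y).label)   # root

After the depths are equalised, the two pointers move up together, so the total work is proportional to the height of the tree.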
Every day you are told which stocks performed best and which did not. Your stock trading program can monitor all those stocks and sort them out over whatever criteria you like. One would imagine that your stock ranking system would make you concentrate on the very best performers. However, when looking at long-term portfolio results, it should raise more than some doubts on this matter since most professional stock portfolio managers do not even outperform market averages. Comparing stock trading strategies should be made over comparables, meaning over the same duration using the same amount as initial capital. It should answer the question, is strategy $ H_k $ better than $ H_{spy} $ or not? $\quad \quad \sum (H_k \cdot \Delta P) >, \dots, > \sum (H_{spy} \cdot \Delta P) >, \dots, > \sum (H_z \cdot \Delta P)$ ? So, why is it that most long-term active trading strategies fail to beat the averages? Already, we expect half of those strategies should perform below averages almost by definition. But, why are there so many more? $\quad \quad \displaystyle{ \sum (\overline{H_k} \cdot \Delta P) < \sum (H_{spy} \cdot \Delta P)}\quad $ for $k=1 $ to some gazillion strategies out there. It is a legitimate question since you will be faced with the same problem going forward. Will you be able to outperform the averages? Are you looking for this ultimate strategy $ H_{k=5654458935527819358914774892147856} $? Or will this number require up to some 75 more digits... We have no way of knowing how many trading strategies or which ones can or will surpass the average benchmark over the long term. But, we do know that over the past, some 75$ \% $, maybe even more, have not exceeded long-term market averages. This leaves some 25$ \% $ or less that have satisfied the condition of outperforming the averages. It is from that lot that we should learn what to do to improve on our own trading strategies. We can still look at the strategies that failed in order not to follow in their footsteps. Imitating strategies that underperform over the long term is not the best starting point. It can only lead to underperforming the averages even more. We need to study and learn from the higher class of trading strategies and know why they outperformed. If we cannot understand the rationale behind such trading strategies or if none is given, then how could we ever duplicate their performance or even enhance them? We have this big blob of price data, the recorded price matrix $ \mathsf{P} $ for all the listed stocks. We can reduce it to a desirable portfolio size by selecting as many columns (stocks) and rows (days) as we like, need or want. Each price is totally described by its place in the price matrix $ p_{d,j} $. And what you want to do is find common grounds in all this data that might show some predictive abilities. You have stock prices going up or down, they usually do not maintain the same value very long. So, you are faced with a game where at any one time prices are basically moving up or down. And all you have to determine is which way they are going. How hard could that be? From the long-term outcome of professional stock portfolio managers, it does appear to be more difficult than it seems. If price predictability is low, all by itself, it would easily explain the fact that most professionals do not outperform the averages over the long term. As a direct consequence, there should be a lot of randomness in price movements. 
And if, or since, that is the case, then most results would tend to some expected mean, which is the long-term market average return. It is easy to demonstrate the near 50/50 odds of having up and down price movements. Simply count the up and down days over an extended period of time over a reasonable sample. The expectation is that you will get, on average, something like 51/49 or 52/48 depending on the chosen sample. The chart below does illustrate this clearly. With those numbers, we have to accept that there is a lot of randomness in the making of those price series. It takes 100 trades to be ahead 2 or 4 trades respectively. With 1000 trades, you should be ahead by 20 or 40 trades. But, you will have to execute those 1000 trades to achieve those results. That is, 2% or 4% of the trades taken will account for most of your generated profits. This says that the top 50 trades out of the 1000 taken will be responsible for most if not all your profits. And that 950 trades out of those 1000 could have been throwaways. Certainly, the 48% of trades you lost (480 trades), if they could have been scrapped, would definitely have helped your cause, profitwise. The problem you encounter is that you do not know which one is which, and thus, the notion of a high degree of randomness. Fortunately, it is only a high degree of randomness and not something that is totally random, because then only luck could make you win the game. Here is an interesting AAPL chart snippet (taken from my 2012 Presentation). It makes that presentation something like a 7-year walk-forward with totally out-of-sample data. The hit rate on that one is very high. It is also the kind of chart we do not see on Quantopian. It was done to answer the question: are the trades executed at reasonable places in the price cycles? A simple look at the chart can answer that question. The chart displays the strategy's trading behavior with its distributed buys (blue arrows) and sells (red arrows) as the price swings up and down. On most swings, some shares are sold near tops and bought near bottoms. The chart is not displayed as a probabilistic technique, but to show some other properties. One, there was no prediction made in handling the trades, none whatsoever. The program does not know what a top or bottom is or even have a notion of mean reversion. Nonetheless, it trades as if it knew something and does make trading profits. Second, entries and exits were performed according to the outcome of time-biased random functions. There are no factors here, no fundamental data, and no technical indicators. It operates on price alone. It does, however, have the notion of delayed gratification. An exit could be delayed following some other random functions, giving a trade a time-measured probabilistic exit. Meaning that a trade could have exceeded its exit criteria but its exit could still be ignored until a later date for no other reason than it was not its lucky exit day. Third, trades were distributed over time in an entry or exit averaging process. The mindset here is to average out the average price near swing tops or bottoms. The program does not know where the tops or bottoms are, but nonetheless its trade positioning process will make it have an average price near those swing tops and bottoms. Fourth, the whole strategy goes on the premise of: accumulate shares over the long term and trade over the process (this is DEVX03 that gradually morphed over the years into its latest iteration, DEVX10).
The above chart depicts the trading process but does not show the accumulation process itself even if it is there. To accumulate shares requires that as time progresses, the stock inventory increases by some measure as prices rise. Here, the proceeds of all sales, including all the profits, are reinvested in buying more shares going forward. And this share accumulation, as well as the accumulated trading profits, will be reflected in the strategy's overall long-term CAGR performance. It is all explained in the above-cited 2012 presentation. The trading methodology itself accounts for everything. It is the method of play that determined how to make the strategy more productive. Just looking at the chart, we have to consider that there was a lot of day to day randomness in those price swings. Yet, without predicting, without technical or fundamental indicators, the strategy managed to prosper over its 5.8-year simulation (1,500 trading days). Since that 2012 presentation, AAPL has quadrupled in price and all along the strategy would have accumulated even more shares and evidently would have profited from its trading operations even more. Whereas the AMZN example would have seen its price go from 176.27 to over 1,800 today. All the while accumulating more and more shares as the shares went up in price. The strategy profited from the rise in price with a rising inventory and profited from all the trading activity. The strategy is based on old methods and does show that it can outperform the averages: $\sum (H_k \cdot \Delta P) \gg \sum (H_{spy} \cdot \Delta P) $. The major force behind strategy $ H_k $ is time. As simple as that. It waits for its trade profit. It was easy to determine some seven years ago that AAPL and AMZN would prosper going forward. We can say the same thing today for the years to come. What that program will do is continue to accumulate shares for the long term and trade over the process, and thereby continue to outperform the averages. Time is a critical factor in a trading strategy. For instance, to the question: if you waited to exit a trade, would you make a profit? To illustrate this, I made the chart below where the red lines would have shown that having picked any of the 427 days shown you would have had a losing trade. On the other side, a green line showed that picking that trading day out of the 427 days, it would have ended with a profit. As can be seen, all 427 trading days could have ended with a profit. Moreover, you could have had multiple attempts at making a profit during the trading interval. Simply picking any one day would have resulted in profits just for having waited for that profit. Nothing fancy needed for this except giving the trade some time. In the end, we all have to make choices. Some are easier than others. But one thing is sure, it will all be in your trading strategy $ H_k $ and what you designed it to do. Do do the best you can since the above does say it can be done.
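As a quick illustration of the up/down-day tally mentioned earlier in this post, here is a small Python sketch; the price list is a placeholder, and the roughly 51/49 or 52/48 split is whatever your own sample produces:

def up_down_split(closes):
    """Fraction of up days and down days in a daily closing-price series."""
    ups = sum(1 for prev, cur in zip(closes, closes[1:]) if cur > prev)
    downs = sum(1 for prev, cur in zip(closes, closes[1:]) if cur < prev)
    total = ups + downs  # unchanged days are ignored
    return ups / total, downs / total

# up_down_split(list_of_daily_closes) might return something like (0.52, 0.48)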
Maybe this is going to seem a lot more involved than it needs to be, but it is likely that complex methods are not the best way to attack an integral like this. Nonetheless, it is possible. We consider the integral in the complex plane $$\oint_C dz \frac{\log{(1+z^2)}}{1+z^2}$$ where $C$ is some contour to be determined. Our first instinct is to make $C$ a simple semicircle in the upper half plane. The problem is that the branch point singularity at $z=i$ is extremely problematic, as it coincides with an ostensible pole. Nonetheless, the corresponding integral over the real line is finite (and twice the originally specified integral), so there must be a way to treat this. The way to go with branch points like this is to avoid them. We thus have to draw $C$ so as to do that, and then use Cauchy's theorem to state that the above complex integral about $C$ is zero. Such a contour $C$ is illustrated below. The contour integral is then taken along six different segments. I will state without proof that the integral about the two outer arcs vanishes as the radius of those arcs $R \to \infty$. We are then left with four integrals: $$\int_{-R}^R dx \frac{\log{(1+x^2)}}{1+x^2} + \left [\int_{C_-}+\int_{C_+}+\int_{C_{\epsilon}} \right ] dz \frac{\log{(1+z^2)}}{1+z^2} = 0$$ $C_-$ is the segment to the right of the imaginary axis, down from the arc to the branch point, $C_+$ is the segment to the left of the imaginary axis, up from the branch point to the arc, and $C_{\epsilon}$ is the circle about the branch point of radius $\epsilon$. It is crucial that we get the arguments of the log correct along each path. I note that the segment $C_-$ is "below" the imaginary axis and I assign the phase of this segment to be $2 \pi$, while I assign the phase of the segment $C_+$ to be $0$. For the segment $C_-$, set $z=i(1+y e^{i 2 \pi})$: $$\int_{C_-} dz \frac{\log{(1+z^2)}}{1+z^2} = i\int_R^{\epsilon} dy \frac{\log{[-y (2+y)]}+ i 2 \pi}{-y (2+y)} $$ For the segment $C_+$, set $z=i(1+y)$: $$\int_{C_+} dz \frac{\log{(1+z^2)}}{1+z^2} = i\int_{\epsilon}^R dy \frac{\log{[-y (2+y)]}}{-y (2+y)} $$ I note that the sum of the integrals along $C_-$ and $C_+$ is $$-2 \pi \int_{\epsilon}^R \frac{dy}{y (2+y)} = -\pi \left [ \log{R} - \log{(2 + R)} - \log{\epsilon} + \log{(2 + \epsilon)}\right]$$ For the segment $C_{\epsilon}$, set $z=i (1+\epsilon e^{-i \phi})$. The integral along this segment is $$\begin{align}\int_{C_{\epsilon}} dz \frac{\log{(1+z^2)}}{1+z^2} &= \epsilon \int_{-2 \pi}^0 d\phi e^{-i \phi} \frac{\log{\left [ -2 \epsilon e^{-i \phi} \right]}}{-2 \epsilon e^{-i \phi}}\end{align}$$ Here we use $\log{(-1)}=-i \pi$ and the above integral becomes $$\begin{align}\int_{C_{\epsilon}} dz \frac{\log{(1+z^2)}}{1+z^2} &= -\frac12 (-i \pi)(2 \pi) - \frac12 \log{2} (2 \pi) - \frac12 \log{\epsilon} (2 \pi) -\frac12 (-i) \frac12 (0-4 \pi^2) \\ &= -\pi \log{2} - \pi \log{\epsilon} \end{align}$$ Adding the above integrals, we have $$\begin{align}\int_{-R}^R dx \frac{\log{(1+x^2)}}{1+x^2} -\pi \log{R} + \pi \log{(2 + R)} + \pi \log{\epsilon} - \pi \log{(2 + \epsilon)} -\pi \log{2} - \pi \log{\epsilon} &= 0\\ \implies \int_{-R}^R dx \frac{\log{(1+x^2)}}{1+x^2} -\pi \log{R} + \pi \log{(2 + R)} - \pi \log{(2 + \epsilon)} -\pi \log{2} &=0\end{align}$$ Now we take the limit as $R \to \infty$ and $\epsilon \to 0$ and we get $$\int_{-\infty}^{\infty} dx \frac{\log{(1+x^2)}}{1+x^2} -2 \pi \log{2} = 0$$ Therefore $$\int_{0}^{\infty} dx \frac{\log{(1+x^2)}}{1+x^2} = \pi \log{2}$$
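A quick numerical sanity check of the final value, independent of the contour argument (a small Python/SciPy sketch):

import numpy as np
from scipy.integrate import quad

value, err = quad(lambda x: np.log(1 + x**2) / (1 + x**2), 0, np.inf)
print(value)               # ~2.17758609...
print(np.pi * np.log(2))   # ~2.17758609..., in agreement with the contour result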
The ET radio signal at \(11\,{\rm GHz}\) was due to a Soviet satellite, LUX and others have found no dark matter directly, the LHC hasn't proven any deviation from the Standard Model whatsoever, and Tracy Slatyer now believes that the seemingly exciting Fermi bubbles arrive from some boring pulsars. Signs of any progress in physics through the experiment are being carefully stopped by Mother Nature. She is telling everyone: Stop with these ludicrous experiments and start to work on string theory seriously. ;-) What other anomalies get killed these days? In two 2014 articles, Signal of neutrino dark matter (February, by Adam Falkowski) and Controversy about the \(3.5\keV\) line (August), you could have learned about some tentative astronomical observations of X-rays with energy \(3.5\keV\), sometimes attributed to a \(7\keV\) sterile neutrino dark matter particle or something else that was equally new. Two weeks ago, a more conservative astro preprint (a murder attempt, if you wish) was posted and it was just published in the Astrophysical Journal: Laboratory measurements compellingly support charge-exchange mechanism for the 'dark matter' \(\sim 3.5\keV\) X-ray line. Chintan Shah and 6 co-authors working in Germany and Benelux are experimenters and brought us a "prototype" of a process they claim to be responsible for the X-ray line. And because they don't assume any new particle, you may be pretty sure that according to their picture, the source of the line is not dark matter – but rather some very visible ordinary baryonic matter (which strongly interacts with the electromagnetic field). In their experiment, they have fully ionized ions of sulfur, \({\rm S}^{16+}\), and some \({\rm S}^{15+}\) ions which have one electron. I think that only one type of the ions is enough and they seem to claim that it doesn't matter much which of them is used. Both of them may play a role in the actual origin of the line if they are right. Now, these ions are interacting with some neutral molecules – it doesn't seem to matter much what molecules they are, some neutral gas. The speeds are low. And the sulfur ion simply captures one electron. They observe the \(3.47\keV\) line experimentally. What's their theory? Their theory is that the electron is first captured to a highly excited state with \(n\geq 7\) – this quantum number is the same number you know from the hydrogen atom because the quantum mechanical problems are isomorphic – and the electron collapses to \(n=1\). The emitted line is simply close to the ionization energy of the ion. As I said, it only differs from the hydrogen atom by the higher charge \(Z=+16\) of the nucleus. Because the energy levels go like \(Z^2\), I expect \[ -E_0\sim 16^2 \times 13.6\eV \sim 3.48\keV. \] Good enough, and the lines starting with \(n\geq 7\) approach so close to the observed \(3.5\keV\) line that it's compatible with the observations within the error margin. Well, I don't really know why they talk about the intermediate bound state with a high \(n\). Can't the electron fall from the continuum directly to \(n=1\), thus producing the \(3.5\keV\) photon as well? The most natural initial state for the captured electron could very well be chosen to be a constant wave function (or a plane wave with a very low momentum, relative to the inverse typical size of the orbits in the sulfur atoms or ions). We want the energy slightly above \(3.5\keV\), anyway, don't we?
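The back-of-the-envelope scaling above is easy to reproduce:

rydberg_eV = 13.6   # hydrogen-like ground-state binding energy for Z = 1
Z = 16              # bare sulfur nucleus
print(Z**2 * rydberg_eV / 1000.0, "keV")   # ~3.48 keV, right next to the observed 3.5 keV line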
They say and it sounds plausible to me that these maximally ionized positive ions may be found between galaxies, along with the electrons in slow matter. What I don't understand is: Why sulfur? Is there something special about sulfur? Or can we observe all the X-ray and other electromagnetic emission lines from the non-sulfur elements as well? What about the sulfur's neighbors, phosphorus and chlorine? Have astronomers overlooked all the \(Z^2\times 13.6\eV\) X-ray lines? See also Phys.org.
How many rectangular $m \times n$ $(0,1)$ matrix (where $n>m$) are there with prescribed row sums $r_i$ for $i=1$ to $m$ such that no two columns are the same. The count of $m\times n$ binary matrices with specified row sums $r_i, i=1,\ldots,m$ and distinct columns can be expressed as a product: $$ n! [1, 0, \ldots ,0] ( \Pi_{i=1}^m T_i ) [0, \ldots ,0, 1]^T $$ where each $T_i$ is a sparse upper triangular matrix depending only on $n$ and $r_i$. The factor $n!$ accounts for permutations of the $n$ distinct columns. We suppress further consideration of that factor by requiring the columns to be ordered descendingly, taking the bit of an upper row to be more significant than one in a lower row. The matrix $T_i$ is the adjacency matrix of a directed multigraph on states that are partitions of the number of columns $n$, ordered by refinement, and whose edges correspond to refining one partition to another by assigning $r_i$ ones to the next row of the matrix (potentially distinguishing some columns that were identical up to that row). Note that initially (before any rows are assigned) all columns are identical, which corresponds to the trivial partition $[n]$. After all rows are assigned we will have all columns distinct, which corresponds to the slightly less trivial partition $[1,1,\ldots ,1]$. Note that this graph allows self-loops, but otherwise it has no cycles. Taking the product of matrices counts paths from one state to another, and we are interested in the count of paths from $[n]$ to $[1,1,\ldots ,1]$ as this corresponds (apart from column permutations) to the number of admissible binary matrices (specified row sums and distinct columns). Still omitting the $n!$ factor, I calculated by hand (and checked with bits of Prolog code) small examples of the form $2k \times 2k$ binary matrices with all rows sums equal $k$. For $k=1$ we get 2 solutions. For $k=2$ there are 52 solutions. For $k=3$ there are 83,680 solutions. As a practical matter we do not need to consider all possible partitions of $n$, only those that are attainable. Taking into account that the first row uniquely transitions from $[n]$ to $[r_1,n-r_1]$ reduces the matrix product by one index and limits the possible partitions. For case $k=4$ in the examples described above, only eight partitions are needed, and the transition matrix can take the form: $$ T = \begin{pmatrix} 2 & 2 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 4 & 0 & 4 & 0 & 6 & 0 & 0 \\ 0 & 0 & 6 & 0 & 0 & 12 & 0 & 1 \\ 0 & 0 & 0 & 6 & 2 & 8 & 6 & 0 \\ 0 & 0 & 0 & 0 & 10 & 0 & 20 & 0 \\ 0 & 0 & 0 & 0 & 0 & 14 & 16 & 6 \\ 0 & 0 & 0 & 0 & 0 & 0 & 30 & 20 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 70 \end{pmatrix} $$ Thus for $k=4$ we'd get (apart from the factor $8!$) a count of $(T^7)_{1,8}$ or 13,849,902,752 solutions. The usefulness of this approach will be limited by how many partitions/states are needed by given parameters $m, n, r_i$. I'd be happy to post my Prolog snippets and/or attempt a larger problem if anyone is interested.
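For what it's worth, the k = 4 figure quoted above can be reproduced directly from the displayed transition matrix with exact integer arithmetic; this is my own sketch of that check (per the answer, the entry should come out to 13,849,902,752, to be multiplied by 8! to account for column permutations):

T = [
    [2, 2, 1, 0, 0, 0, 0, 0],
    [0, 4, 0, 4, 0, 6, 0, 0],
    [0, 0, 6, 0, 0, 12, 0, 1],
    [0, 0, 0, 6, 2, 8, 6, 0],
    [0, 0, 0, 0, 10, 0, 20, 0],
    [0, 0, 0, 0, 0, 14, 16, 6],
    [0, 0, 0, 0, 0, 0, 30, 20],
    [0, 0, 0, 0, 0, 0, 0, 70],
]

def matmul(A, B):
    # plain Python integer matrix product, so nothing overflows
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

P = T
for _ in range(6):        # after six more multiplications, P = T^7
    P = matmul(P, T)
print(P[0][7])            # (T^7)_{1,8}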
As seawater freezes into sea ice, all of the dissolved constituents of the water become concentrated within the solid ice matrix that forms. Because it is more dense than seawater due to the high salt content, a lot of this 'brine' will drain from the ice by gravity. However, some brine remains in the ice down to −55°C, the eutectic point of seawater, at which point the ice transitions to a complete solid with no liquid fraction. Between the freezing point of seawater (about −2°C) and the eutectic, there will be brine with a salinity dependent on the temperature of the ice, up to about 8 times the salinity of seawater. But it's not just salts that are concentrated, but also nutrients, particles, and microorganisms living within the seawater, including viruses and bacteria. A former student in the lab, Dr. Llyd Wells, discussed the consequences of this concentration effect in detail in a 2006 paper in Environmental Microbiology: "Modelled and measured dynamics of viruses in Arctic winter sea-ice brines". In this paper he used a mathematical model to predict contact rates between bacteria and viruses as a function of temperature in sea ice brine, showing that "virus-bacteria contact rates in underlying −1°C seawater were … up to 600 times lower than those in ice brines at or below −24°C." Two contrasting factors affected the relative contact rates. First, the brine concentrating effect described above, which increases contact rates by increasing the concentrations of viruses and bacteria in the ice. Second, the diffusivity decreases with increasing viscosity at lower temperatures, which decreases the contact rates. In the figure shown below, Llyd shows that the result of these contrasting effects is overall a positive one, with very high potential contact rates occurring in the upper, colder sea ice. The equations used were as follows: $$J = 2\pi d D_v V B$$ where $J$ is the contact rate, "$d$ is the spherical diameter of the average cell (cm), $D_v$ the viral diffusivity (cm$^2$ s$^{-1}$), and $V$ and $B$ the [in situ] concentrations of viruses and bacteria respectively (ml$^{-1}$ [brine or seawater])." $$D_v = \dfrac{kT}{3\pi \mu d_v}$$ "where $k$ is Boltzmann's constant, $T$ the temperature (Kelvin), $\mu$ the viscosity (g cm$^{-1}$ s$^{-1}$), and $d_v$ the spherical diameter of the average virus (cm)". $D_v$ can be estimated with the following equation (determined empirically from Figure 1), where $t$ is temperature (°C): $$D_v = 40.5882 \times 10^{-9} \times 10^{0.0325t}$$ The authors provide the following values for the constants: $k = 1.38 \times 10^{-16}$ g cm$^2$ K$^{-1}$ s$^{-2}$, $d = 0.5 \times 10^{-4}$ cm, and $d_v = 60 \times 10^{-7}$ cm. They do not, however, provide a way to calculate $\mu$, the viscosity in the ice, referring instead to a 1975 paper by George Cox (which references a 1960 paper by Dale Kaufmann [which itself references a 1929 paper by Stakelbeck and Plank]). The following multiple linear regression equation can be used to estimate the viscosity (in centipoise = 0.01 g cm$^{-1}$ s$^{-1}$): $$\mu = -0.0835419T + 0.0066835S + 1.7724989$$ [The raw data and R script to calculate the multiple linear regression are available here.] A better empirical equation was determined using ZunZun.com, an amazingly useful site for curve fitting. I used the Function Finder, which identified a Reciprocal Polynomial as the best available curve.
The simplified equation for that curve is ($\mu$ in centipoise = 0.01 g cm$^{-1}$ s$^{-1}$): $$\mu = \dfrac{1}{0.62 + 0.020T + 0.00014T^2 - 0.0012S - 0.000030ST}$$ Finally, to calculate the relative contact rates between seawater and sea ice, given concentrations of bacteria and viruses (per volume brine or seawater): $$\dfrac{J_i}{J_w} = \dfrac{D_{vi}}{D_{vw}} \times \dfrac{B_i}{B_w} \times \dfrac{V_i}{V_w}$$ which can be generalized to: $$\dfrac{J_i}{J_w} = \dfrac{D_{vi}}{D_{vw}} \times \dfrac{f_B}{V_{br}} \times \dfrac{f_V}{V_{br}}$$ where $V_{br}$ is the brine volume fraction (calculator available here), "the subscripts $i$ and $w$ indicate sea ice and water column respectively. The terms $f_B$ and $f_V$ represent the fraction of bacteria and viruses retained in the brine and serve as a correction to account for possible partitioning within the solid phase … as well as for two major mechanisms of loss: destruction due to impinging ice crystals or osmotic stress and release with rejected brine." If passive entrainment into the ice (proportional to salts) is expected for both viruses and bacteria, then $f_B = f_V = \dfrac{S_i}{S_w}$, where $S$ is the bulk salinity of the ice or water. If active entrainment into the ice is expected (complete/active concentration of bacteria and viruses into the ice), then $f_B = f_V = 1$. If the concentration of bacteria (or viruses) in the ice is 0, then $f_B = 0$. Another enrichment index has been used by others, including Riedel (2006), originally from Gradinger and Ikalvko (1998). Their index ($I_s$) is 0 when the concentration in ice is 0 ($I_s = 0$ when $C_i = 0$) and is 1 when the concentration in ice is proportional to the salt retained in the ice ($I_s = 1$ when $C_i/C_w = S_i/S_w$): $$I_s = \dfrac{C_i}{C_w} \dfrac{S_w}{S_i}$$ A third index can be created such that a value of 0 indicates passive enrichment and a value of 1 indicates complete/active enrichment. A value less than zero indicates loss or mortality in the ice (−1 indicates an in situ concentration of 0). Any value greater than 1 indicates production or growth within the ice. $$E = \dfrac{\dfrac{C_i}{C_w}-\dfrac{S_i}{S_w}}{1-\dfrac{S_i}{S_w}}$$
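Putting the pieces together, here is a small Python sketch of the viscosity, diffusivity, and contact-rate formulas quoted above; the constants are the ones given in the post, and the CGS unit handling (°C versus Kelvin, centipoise conversion) reflects my reading of the definitions:

import math

K_BOLTZ = 1.38e-16    # Boltzmann constant, g cm^2 K^-1 s^-2
D_CELL  = 0.5e-4      # spherical diameter of the average cell, cm
D_VIRUS = 60e-7       # spherical diameter of the average virus, cm

def brine_viscosity(t_celsius, salinity):
    """Viscosity in g cm^-1 s^-1 from the reciprocal-polynomial fit (fit is in centipoise)."""
    centipoise = 1.0 / (0.62 + 0.020 * t_celsius + 0.00014 * t_celsius**2
                        - 0.0012 * salinity - 0.000030 * salinity * t_celsius)
    return 0.01 * centipoise   # 1 cP = 0.01 g cm^-1 s^-1

def viral_diffusivity(t_celsius, salinity):
    """Stokes-Einstein diffusivity D_v = kT / (3 pi mu d_v), in cm^2 s^-1."""
    t_kelvin = t_celsius + 273.15
    return K_BOLTZ * t_kelvin / (3 * math.pi * brine_viscosity(t_celsius, salinity) * D_VIRUS)

def contact_rate(t_celsius, salinity, viruses_per_ml, bacteria_per_ml):
    """J = 2 pi d D_v V B, contacts per ml per second."""
    return (2 * math.pi * D_CELL * viral_diffusivity(t_celsius, salinity)
            * viruses_per_ml * bacteria_per_ml)

# Relative rate J_i / J_w, using in-brine and in-seawater temperatures, salinities,
# and concentrations of your choice:
# contact_rate(t_brine, s_brine, V_brine, B_brine) / contact_rate(-1, 35, V_sw, B_sw)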
True or False: The product rule states that to multiply two exponents with the same base, we keep the base and multiply the powers. For example, $2^2 \times 2^3 = 2^{2 \times 3}$.
Simplify $x^8 \div x^2$.
Which of the following is equivalent to $2^{22} - 2^{21}$?
Suppose that $x^{A} x^{\frac{1}{2}} x^{\frac{1}{4}} x^{\frac{1}{8}} = x$. What is the value of $A$?
Simplify $(-x)^2 \times (-x)^4$.
Given the work of Turing and Feferman, all arithmetical truths can be isolated through a transfinite progression of theories like $T_0 = \mathrm{PA}$, $T_{\beta+1} = T_\beta + \mathrm{Con}(T_\beta)$, and $T_\lambda = \bigcup_{\mu \prec \lambda} T_\mu$ when $\lambda$ is a limit ordinal, running through all the recursive ordinals. What is the smallest ordinal $\sigma$ such that $T_\sigma$ proves $\mathrm{Con}(\mathrm{ZF})$? How do such ordinals for arithmetical consistency statements align with proof-theoretic ordinals? Edit: My question does not ask for the proof-theoretic ordinal of ZF. Update: Philip Welch gives a very readable account of such things as I hint to in comments concerning Feferman's work in an answer to a question here: Update 2: My question was badly prepared, as evidenced also by the previous update and the comments in discussion. Noah Schweber kindly suggested that I unaccept his reply until more is clarified concerning my question as related to the Feferman-style process I had in mind, which through a detour into Shoenfield's recursive omega rule (non-constructively) captures all arithmetical truths. I would be surprised if Turing-like collapses down to $\omega+1$ could occur in Feferman-style processes.
Firstly I'll go over the definition of the derivative, and why the derivative of $x^n$ is $n x^{n-1}$. Then I'll try to explain what the derivative is trying to capture, and why the definition makes sense. For a function $f(x)$, its derivative is defined as being $$f'(x) = \lim_{h\to0} \frac{f(x+h)-f(x)}{h}$$ If you haven't met limits before (as I suspect an A-level student may not have), the idea is that the limit tries to capture what a function looks like near a point, in particular here what $\frac{f(x+h)-f(x)}{h}$ is like near $0$, even though at $0$ it's not defined. For $f(x) = x^n$, we have $$f'(x) = \lim_{h\to0} \frac{f(x+h)-f(x)}{h}$$ $$f'(x) = \lim_{h\to0} \frac{(x+h)^n-x^n}{h}$$ $$f'(x) = \lim_{h\to0} \frac{x^n + n h x^{n-1} + \frac{n(n-1)}{2} h^2 x^{n-2} + \dots + h^n -x^n}{h}$$ $$f'(x) = \lim_{h\to0} \left(n x^{n-1} + \frac{n(n-1)}{2} h x^{n-2} + \dots + h^{n-1}\right)$$ Now you can see that each term except the first contains an $h$, so for really small $h$ they're $0$, but the first term is independent of $h$. This gives that $$f'(x) = n x^{n-1}.$$ Now let's talk about the way you should think about the derivative of a function. The derivative of a function tries to say how much the value of the function changes when you change the input by a tiny amount - and you'll consider the ratio of the change, in the same way as you'd consider a percentage change. In particular, you might want to know how much $x^2$ changes when you move from $x=2$ to $x=2.01$, for instance. Another way of thinking about this is the tangent line to the function at a point, so we might want to know what the slope of the tangent to $y=x^2$ is when $x=2$, and it doesn't take much thought to see why these are both the same idea. To look at the tangent line, it makes sense to instead consider the point $(2, 2^2)$ and another point really close to it, say $(2.01, 2.01^2)$, or $(2.0001, 2.0001^2)$, or $(2+h, (2+h)^2)$, where $h$ is really close to $0$ (possibly negative), and look at what the slope of the line connecting these two is for tiny $h$. This starts to look like the definition of the derivative, because the slope of the line connecting $(2+h, (2+h)^2)$ and $(2, 2^2)$ is $$\frac{(2+h)^2 - 2^2}{2+h-2} = \frac{(2+h)^2 - 2^2}{h}$$ and just like above, you can compute the limit of this expression to be $4$. This easily generalises to any point $x$ (instead of $2$), and to any (differentiable) function (instead of $x^2$), to give a limit as defined above. It's important to realise that the derivative is another function in itself, and the meaning of $f'(x) = 2x$ is that at the point $t$, the slope of the tangent line to $y=f(x)$ at $t$ is $2t$. The derivative of any differentiable function could, in theory, be computed directly from the limit, but that's often tedious. So, we use tricks like linearity (sometimes called the sum rule), the product and quotient rules, and the chain rule. It might be instructive to try to prove linearity and the product rule yourself, directly from the definition of the limit.
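If you want to see the limit ground out mechanically, here is a small check with SymPy (symbolic limit for a concrete exponent) plus the numeric difference quotient at $x=2$ for $x^2$:

import sympy as sp

x, h = sp.symbols('x h')
n = 5                                   # any concrete positive integer works the same way
f = x**n
print(sp.limit((f.subs(x, x + h) - f) / h, h, 0))   # 5*x**4, i.e. n*x**(n-1)

# difference quotient for f(x) = x^2 at x = 2: the slope tends to 4
for step in (0.1, 0.01, 0.001):
    print(step, ((2 + step)**2 - 2**2) / step)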
Unitarily invariant norm and Q-norm estimations for the Moore-Penrose inverse of multiplicative perturbations of matrices
Juan Luo
Keywords: Moore-Penrose inverse; multiplicative perturbation; unitarily invariant norm; $Q$-norm
Abstract: Let $B$ be a multiplicative perturbation of $A\in\mathbb{C}^{m\times n}$ given by $B=D_1^* A D_2$, where $D_1\in\mathbb{C}^{m\times m}$ and $D_2\in\mathbb{C}^{n\times n}$ are both nonsingular. New upper bounds for $\Vert B^\dag-A^\dag\Vert_U$ and $\Vert B^\dag-A^\dag\Vert_Q$ are derived, where $A^\dag, B^\dag$ are the Moore-Penrose inverses of $A$ and $B$, and $\Vert \cdot\Vert_U, \Vert \cdot\Vert_Q$ are any unitarily invariant norm and $Q$-norm, respectively. Numerical examples are provided to illustrate the sharpness of the obtained upper bounds.
Let $M$ be a closed connected manifold and fix a basepoint $q \in M$ and a Riemannian metric on $M$. Let $F(M)$ denote the orthonormal frame bundle of $M$. This is a principal $O(n)$-bundle over $M$ ($n = \dim M$). The homotopy sequence of this bundle reads $$\dots \to \pi_2(M,q) \to \pi_1(O(n),I) \to \pi_1(F(M),F_0) \to \pi_1(M,q) \to \pi_0(O(n),I)\to\dots$$ where $F_0$ is a fixed frame over $q$. Let $\pi_1^{\text{or}}(M,q)$ be the kernel of the penultimate arrow; this is the subgroup of $\pi_1M$ represented by "orientable" loops. Then we have the group extension $$0 \to A \to \pi_1(F(M),F_0) \to \pi_1^{\text{or}}(M,q) \to 1$$ where $A$ is the quotient of $\pi_1(O(n),I)$ by the image of $\pi_2(M,q)$. Assume $A \simeq \mathbb Z_2$ (which happens for $n \geq 3$ and $w_2(M) = 0$, I think). Then this is a central group extension and therefore we have the associated class in the second group cohomology $H^2(\pi_1^{\text{or}}(M,q);\mathbb Z_2)$. What can be said about this class in terms of the topology of $M$? Does it have to do anything with Stiefel-Whitney classes? When is it trivial? It seems that when $M$ is orientable and Spin, then this class is indeed trivial; moreover different splittings of this extension correspond to different Spin structures. A reformulation of this is as follows (I'm adding this just to give an additional perspective): the above fiber bundle gives rise to the associated based loop space fibration $$\Omega SO(n) \to \Omega F(M) \to \Omega^{\text{or}}M$$ and the group extension is obtained by looking at the last terms of the corresponding homotopy sequence. And lastly, it looks like this should be understood in the broader context of principal bundles and characteristic classes. If someone can point me in the direction of understanding this or give a reference, that'd be great.
...and a few other deviations... ATLAS and CMS have combined their Higgs production-and-decay analyses and many numbers agree with the SM predictions within 2 sigma. However, numerous measured numbers don't agree so well. Fifth force: First, off-topic, Natalie Wolchover wrote a helpful article about the Hungarian claims of a new 17 MeV boson, pointing out that the researchers in Debrecen (the town that gave the name to the popular Czech Debrecen ham) have "discovered" a dozen similar bosons in recent years and have viewed null results as a "failure", thus proving their lack of integrity. Look at page 22, second paragraph from the end. The most interesting paragraph starts with "The \(p\)-value". You are immediately shown a quantity that deviates from the Standard Model by 3 sigma: \[ \frac{\sigma_{ttH}/\sigma_{ggF}}{\text{the same ratio in SM}} = 3.3\pm 0.9 \] Note that the right hand side would be "one" if there were a "perfect" agreement of the theory with the observations. That doesn't mean that the cross sections are the same; it's one because the cross sections were divided by one another in the same way. However, the measured cross section ratio is 3.3 times higher. So either the \(ttH\) production (top-top-Higgs) seems more frequent than predicted; or \(ggF\) production (gluon fusion) seems less frequent than predicted. We are told that "multi-lepton categories" contribute to this deviation. That's not the only similar ratio that looks off. An analogous cross section ratio \[ \frac{\sigma_{ZH}/\sigma_{ggF}}{\text{the same ratio in SM}} = 3.2\pm 1.4 \] boasts some 2-sigmaish excess, mainly due to the \(ZH\to ZWW\) subchannel. Also, the ratio of branching ratios \[ \frac{B^{bb}/B^{ZZ}}{\text{the same ratio in SM}} = 0.19\pm 0.21 \] is what they say to be a 2.5-sigma deficit relative to the value one, i.e. the Standard Model. So the Higgs seems to decay to a bottom quark-antiquark pair less frequently or to the electroweak boson pairs more frequently. This deviation in the ratio of branching ratios seems very strong on Figure 9. (I have always indicated the symmetric \(\pm\) error margins. But the paper always reports asymmetric ones and to simplify things, I have chosen the unified error margin from the "more relevant side", the side towards the theoretical prediction.) Now, it is very interesting and such deviations may suggest a more complicated Higgs sector than the Standard Model Higgs sector. Maybe, there is at least one other Higgs boson which "specializes" in those bottom decays and takes some work from the \(125.09\GeV\) Higgs. The supersymmetric standard models and their extensions could probably have an explanation if these decays were real. Note that only the \(\sqrt{s}=7\TeV\), \(8\TeV\) data from 2011 (5/fb) and 2012 (20/fb) have been used. One complaint is that they evaluated various ratios of cross sections and branching ratios. There are many ratios you may consider and some of them are bound to deviate more than others. If you take the cross section with the greatest excess and divide it by the cross section with the greatest deficit, you will increase the excess even more. So the numerous possible ratios increase the look-elsewhere effect and I am not sure whether this extra look-elsewhere effect has been used to "punish" the declared confidence levels. Why do they talk about the ratios and not the individual cross sections and branching ratios themselves? It's because if one takes the ratios, the theoretical uncertainty is almost zero.
I don't know why it's so important because some of these are quoted to be as low as 5 percent etc. I wouldn't get carried away by similar 2-sigma and 3-sigma excesses, especially because they are deviations in rather artificial quantities we weren't thinking too much about previously. But as always, some of these 3-sigma and maybe even 2-sigma deviations may turn out to be real. The LHC detectors have already collected 4/fb (2015) plus 3/fb (2016) of the collisions at \(13\TeV\) which may strengthen or weaken all of these deviations.
Homework Statement: Two identical audio speakers, connected to the same amplifier, produce monochromatic sound waves with a frequency that can be varied between 300 and 600 Hz. The speed of sound is 340 m/s. You find that, where you are standing, you hear minimum-intensity sound.
a) Explain why you hear minimum-intensity sound.
b) If one of the speakers is moved 39.8 cm toward you, the sound you hear has maximum intensity. What is the frequency of the sound?
c) How much closer to you from the position in part (b) must the speaker be moved to the next position where you hear maximum intensity?
Homework Equations: interference
I have no idea how to proceed. I started with ## \text{frequency} = \frac{\text{speed of sound}}{\lambda} = \frac{340\ \mathrm{m/s}}{\lambda} ## and then ## d \sin\alpha = \frac{\lambda}{2} ##, but now I'm stuck. Any help please?
A word (i.e., ordered string of letters) is bifix-free provided it has no proper initial string and terminal string that are identical. For example, the word $ingratiating$ has bifix $ing$, but the word $ingratiate$ is bifix-free. Let $a_n^{(q)}$ be the number of bifix-free words over a fixed $q$-letter alphabet. The generating function $f(x) = f^{(q)}(x) = \sum_{n = 0}^{\infty} a_nx^n$ satisfies the functional equation $$f(x) = qx + qxf(x) - f(x^2).$$ Solving for $f(x)$ gives $$f(x) = \frac{qx - f(x^2)}{1-qx} = \cdots = \sum_{j=0}^{\infty} \frac{(-1)^jqx^{2^j}}{\prod_{k=0}^{j} (1 - qx^{2^k})}.$$ (Q1) Is there a "nice" solution to this functional equation, perhaps one that does not involve an infinite product or sum? Here is a plot of $f^{(2)}(x)$ generated in Sage: The limit as $n \to \infty$ of the probability that a uniformly random word of length $n$ is bifix-free is $1 - f^{(q)}(q^{-2})$. From the initial three terms in the alternating series above, as $q \to \infty$, $$f^{(q)}(q^{-2}) = \frac{1}{q-1} - \frac{1+o(1)}{q^3}.$$ (Q2) What else can be said specifically about $f^{(q)}(q^{-2})$ for integers $q \geq 2$? Is there a simpler formula than the infinite series above? Are these values rational? ...algebraic? Here is a plot of $f^{(x)}(x^{-2})$ generated in Sage: (The combinatorics-on-words paper associated with this function is here.)
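Reading off coefficients, the functional equation gives $a_1 = q$ and, for $n \geq 2$, $a_n = q\,a_{n-1} - a_{n/2}$ when $n$ is even and $a_n = q\,a_{n-1}$ when $n$ is odd (here $a_n$ counts bifix-free words of length $n$). A small Python sketch of the recurrence, checked against brute force; the helper names are mine:

from itertools import product

def a(n, q):
    """Number of bifix-free words of length n over a q-letter alphabet,
    via the coefficient recurrence of f(x) = qx + qx f(x) - f(x^2)."""
    vals = {1: q}
    for m in range(2, n + 1):
        vals[m] = q * vals[m - 1] - (vals[m // 2] if m % 2 == 0 else 0)
    return vals[n]

def brute_force(n, q):
    # count words with no proper prefix equal to the suffix of the same length
    return sum(1 for w in product(range(q), repeat=n)
               if not any(w[:k] == w[-k:] for k in range(1, n)))

assert all(a(n, 2) == brute_force(n, 2) for n in range(1, 10))
print([a(n, 2) for n in range(1, 10)])   # 2, 2, 4, 6, 12, 20, 40, 74, 148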
Much has changed in the last month. I moved to Rhode Island and began grad school. That's a pretty big change. I'm suddenly much more focused in my studies again (undeniably a good thing), figuring out what I will do. Solid. And I'm struggling to get acquainted with the curious structure of classes at Brown. There are many, many calculus classes here, for example. As a tutor, I'm somewhat expected to know these things. Classes on differentiation, integration, fast-paced and (maybe) slow-paced versions, calc I but with vectors incorporated – all of these fell under the blanket heading of Calc I at Georgia Tech. And my feelings are mixed. It's an interesting idea. The general freedom to make mistakes at Brown is something that I firmly stand behind, though. But there is one thing that I think is very poorly done – why is there not more interdepartmental cooperation? Brown is an ivy, and we're close to many other schools that are excellent at many things in math. Why is there no form of cooperation between these universities? This is something that I absolutely must change. Somehow. I'll work on that. Ok, let's actually do some math. I recently came across a fun paper, The Fundamental Theorem of Algebra: A Most Elementary Proof, by Oswaldo Rio Branco de Oliveira, on proving the Fundamental Theorem of Algebra with no bells, whistles, or ballyhoo in general. All that is assumed is the Bolzano-Weierstrass Theorem and that polynomials are continuous. Here is the gist of the proof. THEOREM: Let $P(z) = a_0 + a_1 z + \dots + a_n z^n$, $a_n \neq 0$, be a complex polynomial with degree $n \geq 1$. Then $P$ has a zero. Proof: We have that $|P(z)| \geq |a_n||z|^n - |a_{n-1}||z|^{n-1} - \dots - |a_1||z| - |a_0|$, and so $\lim_{|z| \to \infty} |P(z)| = \infty$. By continuity, $|P|$ has a global minimum at some $z_0 \in \mathbb{C}$. We suppose wlog that $z_0 = 0$. Then $|P(z)|^2 - |P(0)|^2 \geq 0$ for all $z \in \mathbb{C}$. Then we may write $P(z) = P(0) + z^k Q(z)$ for some $k \in \{1, \dots, n \}$, where $Q$ is a polynomial and $Q(0) \neq 0$ (the idea being that one factored that part out already). Picking some $\zeta \in \mathbb{C}$, substituting $z = r \zeta$, $r \geq 0$, into the above inequality, and dividing by $r^k$, we get: $2 \mathrm{Re} [ \overline{P(0)} \zeta ^k Q(r \zeta)] + r^k |\zeta ^k Q(r \zeta )|^2 \geq 0 \quad \forall r > 0, \forall \zeta$. The left side is a continuous function of $r$ for nonnegative $r$, and so taking the limit as $r \to 0$, one finds $2 \mathrm{Re} [ \overline{P(0)} \zeta ^k Q(0)] \geq 0 \quad \forall \zeta$. Now suppose $\alpha := \overline{P(0)}Q(0) = a + b i$. For $k$ odd, setting $\zeta = \pm 1$ and $\zeta = \pm i$ in this inequality lets us conclude that $a = b = 0$. So then we have $P(0) = 0$, and the odd case is complete. Now before I go on, I give a brief lemma, which I'll not prove here. But it just requires using binomial expansions and keeping track of lots of exponents and factorials. Lemma (credit given to Estermann): For $\zeta = \left( 1 + \frac{i}{k} \right)^2$ and $k \geq 2$, even, we have that $\mathrm{Re}[\zeta ^k] < 0 < \mathrm{Im} [\zeta ^k]$. For $k$ even, we don't have the handy cancellation that we used above. But let us choose $\zeta$ as in this lemma, and write $\zeta ^k = x + iy$, with $x < 0$, $y > 0$.
Then we can substitute $\zeta ^k$ and $\overline{ \zeta ^k}$ in the inequality, and a little work shows that $\mathrm{Re}[\alpha (x \pm iy)] = ax \mp by \geq 0$. So $ax \geq 0$ and since $x < 0$, we see $a \leq 0$. But then $a = 0$. Similarly, we get $b = 0$ after considering $\mp by \geq 0$. Then we again see that $P(0) = 0$, and the theorem is proved as long as you believe Estermann's Lemma. I like it when such results have relatively simple proofs. The first time I came across the FTA, we used lots of machinery to prove it. Some integration and differentiation on series, in particular. And now that I'm vaguely settled and now that I see new things routinely, perhaps I'll update this more. Not at Tao-pace, but something.
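A quick numerical check of Estermann's Lemma for the first few even values of $k$, just to build confidence in the statement:

for k in range(2, 11, 2):
    zeta = (1 + 1j / k) ** 2
    w = zeta ** k
    print(k, w.real < 0 < w.imag, w)   # prints True for each even k tested here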
Matrix factorization is a simple embedding model. Given the feedback matrix \(A \in \mathbb R^{m \times n}\), where \(m\) is the number of users (or queries) and \(n\) is the number of items, the model learns: A user embedding matrix \(U \in \mathbb R^{m \times d}\), where row i is the embedding for user i. An item embedding matrix \(V \in \mathbb R^{n \times d}\), where row j is the embedding for item j. The embeddings are learned such that the product \(U V^T\) is a good approximation of the feedback matrix A. Observe that the \((i, j)\) entry of \(U V^T\) is simply the dot product \(\langle U_i, V_j\rangle\) of the embeddings of user \(i\) and item \(j\), which you want to be close to \(A_{i, j}\). Choosing the Objective Function One intuitive objective function is the squared distance. To do this, minimize the sum of squared errors over all pairs of observed entries: \[\min_{U \in \mathbb R^{m \times d},\ V \in \mathbb R^{n \times d}} \sum_{(i, j) \in \text{obs}} (A_{ij} - \langle U_{i}, V_{j} \rangle)^2.\] In this objective function, you only sum over observed pairs (i, j), that is, over non-zero values in the feedback matrix. However, only summing over the observed values of one is not a good idea—a matrix of all ones will have a minimal loss and produce a model that can't make effective recommendations and that generalizes poorly. Perhaps you could treat the unobserved values as zero, and sum over all entries in the matrix. This corresponds to minimizing the squared Frobenius distance between \(A\) and its approximation \(U V^T\): \[\min_{U \in \mathbb R^{m \times d},\ V \in \mathbb R^{n \times d}} \|A - U V^T\|_F^2.\] You can solve this quadratic problem through Singular Value Decomposition (SVD) of the matrix. However, SVD is not a great solution either, because in real applications, the matrix \(A\) may be very sparse. For example, think of all the videos on YouTube compared to all the videos a particular user has viewed. The solution \(UV^T\) (which corresponds to the model's approximation of the input matrix) will likely be close to zero, leading to poor generalization performance. In contrast, Weighted Matrix Factorization decomposes the objective into the following two sums: A sum over observed entries. A sum over unobserved entries (treated as zeroes). \[\min_{U \in \mathbb R^{m \times d},\ V \in \mathbb R^{n \times d}} \sum_{(i, j) \in \text{obs}} (A_{ij} - \langle U_{i}, V_{j} \rangle)^2 + w_0 \sum_{(i, j) \not \in \text{obs}} (\langle U_i, V_j\rangle)^2.\] Here, \(w_0\) is a hyperparameter that weights the two terms so that the objective is not dominated by one or the other. Tuning this hyperparameter is very important. In practice, you may also weight the observed pairs individually, for example to account for very frequent items or very active users: \[\sum_{(i, j) \in \text{obs}} w_{i, j} (A_{i, j} - \langle U_i, V_j \rangle)^2 + w_0 \sum_{i, j \not \in \text{obs}} \langle U_i, V_j \rangle^2\] where \(w_{i, j}\) is a function of the frequency of query i and item j. Minimizing the Objective Function Common algorithms to minimize the objective function include: Stochastic gradient descent (SGD) is a generic method to minimize loss functions. Weighted Alternating Least Squares (WALS) is specialized to this particular objective. The objective is quadratic in each of the two matrices U and V. (Note, however, that the problem is not jointly convex.) WALS works by initializing the embeddings randomly, then alternating between: Fixing \(U\) and solving for \(V\). Fixing \(V\) and solving for \(U\). Each stage can be solved exactly (via solution of a linear system) and can be distributed.
This technique is guaranteed to converge because each step is guaranteed to decrease the loss. SGD vs. WALS: SGD and WALS have advantages and disadvantages. SGD is very flexible (it can use other loss functions) and can be parallelized, but it is slower (it does not converge as quickly) and it is harder to handle the unobserved entries (you need to use negative sampling or gravity). WALS relies on squared loss only, but it can also be parallelized, converges faster than SGD, and makes it easier to handle the unobserved entries.
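To make the alternation concrete, here is a minimal NumPy sketch of the weighted objective and the two exact alternating solves. It is an illustration of the idea under the objective stated above, not a reference implementation from any library, and all of the names are mine:

import numpy as np

def wals(A, observed, d=8, w0=0.1, iters=20, seed=0):
    """Weighted alternating least squares for
       sum_obs (A_ij - <U_i, V_j>)^2 + w0 * sum_unobs <U_i, V_j>^2.
    A        : (m, n) feedback matrix (values used only where observed == 1)
    observed : (m, n) 0/1 array marking the observed entries
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    U = rng.normal(scale=0.1, size=(m, d))
    V = rng.normal(scale=0.1, size=(n, d))

    def solve_rows(X, targets, obs):
        # For each row i, exactly minimize the weighted quadratic with X held fixed.
        out = np.empty((obs.shape[0], X.shape[1]))
        for i in range(obs.shape[0]):
            w = np.where(obs[i] == 1, 1.0, w0)            # per-entry weights
            G = (X * w[:, None]).T @ X                    # d x d normal matrix
            b = X.T @ (w * obs[i] * targets[i])           # unobserved targets are zero
            out[i] = np.linalg.solve(G + 1e-8 * np.eye(X.shape[1]), b)
        return out

    for _ in range(iters):
        U = solve_rows(V, A, observed)        # fix V, solve for U (users)
        V = solve_rows(U, A.T, observed.T)    # fix U, solve for V (items)
    return U, V

Each inner solve is the exact minimizer of the quadratic objective with the other factor held fixed, which is why every sweep can only decrease the loss, as noted above.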
I'm trying to convert my LaTeX document + equations to a Word document and I'm using MathType. I've tried some of the .tex to Word converters, htlatex etc., and it just doesn't work, and it's going to be quicker to do it myself. However, all my LaTeX documents use my own commands via \newcommand so that I can change symbols throughout the entire document quickly. For example: \newcommand{\partial_flux}{\phi} \newcommand{\Real_Space}{\mathbb{R}} \newcommand{\Cartesian_coord_1}{x} Then my equation would be something like: \begin{equation}\begin{aligned} \partial_flux(\Cartesian_coord_1) \in \Real_Space \end{aligned}\end{equation} With OpenOffice there is a package called TexMath and it allows one to give it the list of \newcommand definitions under the tab called "preamble". Then one simply pastes the LaTeX equations, clicks the convert button, and it all works great. I cannot use OpenOffice and must instead use Word, so is there a way in which I can do this within Word?
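Not an answer for doing it inside Word itself, but one workaround that may help: inline the zero-argument \newcommand definitions into the document body with a small script before pasting the equations into MathType. A rough Python sketch, assuming all of your macros are simple parameterless text substitutions (the function name and the example macros are mine):

import re

def inline_newcommands(tex: str) -> str:
    """Expand \newcommand{\name}{body} definitions (no arguments, one level of
    brace nesting in the body) and strip the definitions themselves."""
    pattern = r'\\newcommand\{(\\[A-Za-z_0-9]+)\}\{((?:[^{}]|\{[^{}]*\})*)\}'
    defs = dict(re.findall(pattern, tex))
    body = re.sub(pattern, '', tex)
    for name in sorted(defs, key=len, reverse=True):   # longest names first
        body = body.replace(name, defs[name])
    return body

print(inline_newcommands(
    r'\newcommand{\flux}{\phi}\newcommand{\Rspace}{\mathbb{R}}'
    r'\begin{equation} \flux(x) \in \Rspace \end{equation}'))
# -> \begin{equation} \phi(x) \in \mathbb{R} \end{equation}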
How does one evaluate the following product if the set S happens to be empty? \begin{aligned} f(n)= n \prod_{x \in S} \left(1-\frac{1}{x}\right) \end{aligned} Is the value simply n or is it undefined (or zero)?? Thanks. Edit: It seems rather odd that this question has been rated off-topic for lacking context or other details. I would have thought it rather obvious that it was about how to evaluate the product when there is no x due to an empty set. I would have guessed undefined because one cannot assign a value to $(1-1/x)$. However, as shown by C.Falcon, the convention is $1$. There's no other context or missing details. Feel free to delete if it doesn't meet the relevant standards.
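For what it's worth, the empty-product convention (value 1) is also what the Python standard library uses, so with an empty $S$ the expression just reduces to $n$:

import math

S = []                                      # empty index set
empty_product = math.prod(1 - 1/x for x in S)
print(empty_product)                        # 1, the empty-product convention
print(7 * empty_product)                    # f(7) = 7 when S is empty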
By Dr Adam Falkowski (Résonaances; Orsay, France). The title of this post is purposely over-optimistic in order to increase the traffic. A more accurate statement is that a recent analysis of the X-ray spectrum of galactic clusters (Detection of An Unidentified Emission Line in the Stacked X-ray Spectrum of Galaxy Clusters, by Esra Bulbul and 5 co-authors, NASA/Harvard-Smithsonian) claims the presence of a monochromatic \(3.5\keV\) photon line which can be interpreted as a signal of a \[ \large{m_{\nu({\rm ster})} = 7\keV } \] sterile neutrino dark matter candidate decaying into a photon and an ordinary neutrino. It's a long way before this claim may become a well-established signal. Nevertheless, in my opinion, it's not the least believable hint of dark matter coming from astrophysics in recent years. First, let me explain why anyone would dirty their hands to study X-ray spectra. In the most popular scenario the dark matter particle is a WIMP — a particle in the \(\GeV\)-\(\TeV\) mass ballpark that has weak-strength interactions with the ordinary matter. This scenario may predict signals in gamma rays, high-energy anti-protons, electrons etc, and these are being searched high and low by several Earth-based and satellite experiments. But in principle the mass of the dark matter particle could be anywhere between \(10^{-30}\) and \(10^{50}\GeV\), and there are many other models of dark matter on the market. One serious alternative to WIMPs is a \(\keV\)-mass sterile neutrino. In general, neutrinos are dark matter: they are stable, electrically neutral, and are produced in the early universe. However, we know that the 3 neutrinos from the Standard Model constitute only a small fraction of dark matter, as otherwise they would affect the large-scale structure of the universe in a way that is inconsistent with observations. The story is different if the 3 "active" neutrinos have partners from beyond the Standard Model that do not interact with W- and Z-bosons — the so-called "sterile" neutrinos. In fact, the simplest UV-complete models that generate masses for the active neutrinos require introducing at least 2 sterile neutrinos, so there are good reasons to believe that these guys exist. A sterile neutrino is a good dark matter candidate if its mass is larger than \(1\keV\) (because of the constraints from the large-scale structure) and if its lifetime is longer than the age of the universe. How can we see if this is the right model? Dark matter that has no interactions with the visible matter seems hopeless. Fortunately, sterile neutrino dark matter is expected to decay and produce a smoking-gun signal in the form of a monochromatic photon line. This is because, in order to be produced in the early universe, the sterile neutrino should mix slightly with the active ones. In that case, oscillations of the active neutrinos into sterile ones in the primordial plasma can populate the number density of sterile neutrinos, and by this mechanism it is possible to explain the observed relic density of dark matter. But the same mixing will make the sterile neutrino decay, as shown in the diagrams here. If the sterile neutrino is light enough and/or the mixing is small enough then its lifetime can be much longer than the age of the universe, and then it remains a viable dark matter candidate.
The tree-level decay into 3 ordinary neutrinos is undetectable, but the 2-body loop decay into a photon and a neutrino results in production of photons with the energy \[ \large{E=\frac{m_{\rm DM}}{2}.} \] Such a monochromatic photon line can potentially be observed. In fact, in the simplest models sterile neutrino dark matter heavier than \(\approx 50\keV\) would produce a too large photon flux and is excluded. Thus the favored mass range for dark matter is between \(1\) and \(50\keV\). Then the photon line is predicted to fall into the X-ray domain that can be studied using X-ray satellites like XMM-Newton, Chandra, or Suzaku. Until last week these searches were only providing lower limits on the lifetime of sterile neutrino dark matter. This paper claims they may have hit the jackpot. The paper uses the XMM-Newton data to analyze the stacked X-ray spectra of many galaxy clusters where dark matter is lurking. After subtracting the background, what they see is this: Although the natural reaction here is a loud "are you kidding me", the claim is that the excess near \(3.56\keV\) (red data points) over the background model is very significant, at 4-5 astrophysical sigma. It is difficult to assign this excess to any known emission lines from usual atomic transitions. If interpreted as the signal of sterile neutrino dark matter, the measured energy and flux correspond to the red star in the plot, with the mass \(7.1\keV\) and the mixing angle of order \(5\times 10^{-5}\). This is allowed by other constraints and, by twiddling with the lepton asymmetry in the neutrino sector, consistent with the observed dark matter relic density. Clearly, a lot could go wrong with this analysis. For one thing, the suspected dark matter line doesn't stand alone in the spectrum. The background mentioned above consists not only of continuous X-ray emission but also of monochromatic lines from known atomic transitions. Indeed, the \(2\)-\(10\keV\) range where the search was performed is crowded with emission lines: the authors fit 28 separate lines to the observed spectrum before finding the unexpected residue at \(3.56\keV\). The results depend on whether these other emission lines are modeled properly. Moreover, the known argon XVII dielectronic recombination line happens to be nearby at \(3.62\keV\). The significance of the signal decreases when the flux from that line is allowed to be larger than predicted by models. So this analysis needs to be confirmed by other groups and by more data before we really get excited. Decay diagrams borrowed from this review. For more up-to-date limits on sterile neutrino DM see this paper, or this plot. Update: another independent analysis of XMM-Newton data observes the anomalous 3.5 keV line in the Andromeda galaxy and the Perseus cluster. The text was reposted from Adam's blog with his permission...
Demand/Dynamic User Assignment (latest revision as of 13:02, 12 February 2018)

Introduction

For a given set of vehicles with origin-destination relations (trips), the simulation must determine routes through the network (a list of edges) that are used to reach the destination from the origin edge. The simplest method to find these routes is to compute shortest or fastest routes through the network using a routing algorithm such as Dijkstra or A*. These algorithms require assumptions regarding the travel time for each network edge, which is commonly not known before running the simulation because travel times depend on the number of vehicles in the network.
The problem of determining suitable routes that take into account travel times in a traffic-loaded network is called user assignment. SUMO provides different tools to solve this problem; they are described below. Iterative Assignment (Dynamic User Equilibrium) The tool <SUMO_HOME>/tools/assign/duaIterate.py can be used to compute the (approximate) dynamic user equilibrium. python duaIterate.py -n <network-file> -t <trip-file> -l <nr-of-iterations> duaIterate.py supports many of the same options as SUMO. Any options not listed when calling duaIterate.py --help can be passed to SUMO by adding sumo--long-option-name arg after the regular options (e.g. sumo--step-length 0.5). This script tries to calculate a user equilibrium, that is, it tries to find a route for each vehicle (each trip from the trip-file above) such that each vehicle cannot reduce its travel cost (usually the travel time) by using a different route. It does so iteratively (hence the name) by repeating two steps: calling DUAROUTER to route the vehicles in the network with the last known edge costs (starting with empty-network travel times), and calling SUMO to simulate the "real" travel times that result from the calculated routes. The resulting edge costs are used in the next routing step. The number of iterations may be set to a fixed number or determined dynamically depending on the used options. In order to ensure convergence there are different methods employed to calculate the route choice probability from the route cost (so the vehicle does not always choose the "cheapest" route). In general, new routes will be added by the router to the route set of each vehicle in each iteration (at least if none of the present routes is the "cheapest") and may be chosen according to the route choice mechanisms described below. Between successive calls of DUAROUTER, the .rou.alt.xml format is used to record not only the current best route but also previously computed alternative routes. These routes are collected within a route distribution and used when deciding the actual route to drive in the next simulation step. This isn't always the one with the currently lowest cost but is rather sampled from the distribution of alternative routes by a configurable algorithm described below. Route-Choice algorithm The two methods which are implemented are called Gawron and Logit in the following. The input for each of the methods is a weight or cost function on the edges of the net, coming from the simulation or default costs (in the first step or for edges which have not been traveled yet), and a set of routes where each route has an old cost and an old probability (from the last iteration) and needs a new cost and a new probability. Gawron (default) The Gawron algorithm computes probabilities for choosing from a set of alternative routes for each driver. The following values are considered to compute these probabilities: the travel time along the used route in the previous simulation step, the sum of edge travel times for a set of alternative routes, and the previous probability of choosing a route. Logit The Logit mechanism applies a fixed formula to each route to calculate the new probability. It ignores old costs and old probabilities and takes the route cost directly as the sum of the edge costs from the last simulation. The probabilities are calculated from an exponential function with parameter \(\theta\), scaled by the sum over all route values: \(p_r' = \frac{\exp(\theta c_r')}{\sum_{s\in R}\exp(\theta c_s')}\) Termination The option --max-convergence-deviation may be used to detect convergence and abort iterations automatically.
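To make the Logit rule above concrete, here is a minimal Python sketch (our own illustration, not part of SUMO; the function name and the example value of the scaling parameter theta are assumptions):

import math

def logit_probabilities(costs, theta=-0.05):
    # Compute p_r' = exp(theta*c_r') / sum_s exp(theta*c_s') for a list of
    # route costs taken from the last simulation.  theta is assumed negative
    # so that cheaper routes receive a higher probability.
    weights = [math.exp(theta * c) for c in costs]
    total = sum(weights)
    return [w / total for w in weights]

# Example: three alternative routes with travel times of 100 s, 120 s and 150 s
print(logit_probabilities([100.0, 120.0, 150.0]))

In duaIterate.py the scaling parameter and the exact route-cost definition are configurable; the snippet only mirrors the formula quoted above.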
Otherwise, a fixed number of iterations is used. Once the script finishes, any of the resulting .rou.xml files may be used for simulation, but the last one(s) should be the best. Usage Examples Loading vehicle types from an additional file By default, vehicle types are taken from the input trip file and are then propagated through DUAROUTER iterations (always as part of the written route file). In order to use vehicle type definitions from an additional-file, further options must be set: duaIterate.py -n ... -t ... -l ... --additional-file <FILE_WITH_VTYPES> duarouter--additional-file <FILE_WITH_VTYPES> duarouter--vtype-output dummy.xml Options preceded by the string duarouter-- are passed directly to duarouter, and the option vtype-output dummy.xml must be used to prevent duplicate definition of vehicle types in the generated output files. oneShot-assignment An alternative to the iterative user assignment above is incremental assignment. This happens automatically when using <trip> input directly in SUMO instead of <vehicle>s with pre-defined routes. In this case each vehicle performs a fastest-path computation at the time of departure, which prevents all vehicles from driving blindly into the same jam and works pretty well empirically (for larger scenarios). The routes for this incremental assignment are computed using the Automatic Routing / Routing Device mechanism. Since this device allows for various configuration options, the script Tools/Assign#one-shot.py may be used to automatically try different parameter settings. The MAROUTER application computes a classic macroscopic assignment. It employs mathematical functions (resistive functions) that approximate travel time increases when increasing flow. This allows computing an iterative assignment without the need for time-consuming microscopic simulation.
At least three times now, I have needed the fact that Hurwitz zeta functions are sums of L-functions, and its converse, only to have forgotten how it goes. And unfortunately, the current wikipedia article on the Hurwitz Zeta function has a mistake, omitting the $\varphi$ term (although it will soon be corrected). Instead of re-doing it each time, I write this detail here, below the fold. The Hurwitz zeta function, for complex $latex {s}$ and real $latex {0 < a \leq 1}$, is $latex {\zeta(s,a) := \displaystyle \sum_{n = 0}^\infty \frac{1}{(n + a)^s}}$. A Dirichlet L-function is a function $latex {L(s, \chi) = \displaystyle \sum_{n = 1}^\infty \frac{\chi (n)}{n^s}}$, where $latex {\chi}$ is a Dirichlet character. This note contains a few proofs of the following relations: Lemma 1 $latex \displaystyle \zeta(s, l/k) = \frac{k^s}{\varphi (k)} \sum_{\chi \mod k} \bar{\chi} (l) L(s, \chi) \ \ \ \ \ (1)$ $latex \displaystyle L(s, \chi) = \frac{1}{k^s} \sum_{n = 1}^k \chi(n) \zeta(s, \frac{n}{k}) \ \ \ \ \ (2)$ Proof: We start by considering $latex {L(s, \chi)}$ for a Dirichlet character $latex {\chi \mod k}$. We multiply by $latex {\bar{\chi}(l)}$ for some $latex {l}$ that is relatively prime to $latex {k}$ and sum over the different $latex {\chi \mod k}$ to get $latex \displaystyle \sum_\chi \bar{\chi}(l) L(s,\chi)$ We then expand the L-function and sum over $latex {\chi}$ first. $latex \displaystyle \sum_\chi \bar{\chi}(l) L(s,\chi)= \sum_\chi \bar{\chi} (l) \sum_n \frac{\chi(n)}{n^s} = \sum_n \sum_\chi \left( \bar{\chi}(l) \chi(n) \right) n^{-s}= $ $latex \displaystyle = \sum_{\substack{ n > 0 \\ n \equiv l \mod k}} \varphi(k) n^{-s}$ In this last line, we used a fact commonly referred to as the "Orthogonality of Characters", which says exactly that $latex {\displaystyle \sum_{\chi \mod k} \bar{\chi}(l) \chi(n) = \begin{cases} \varphi(k) & n \equiv l \mod k \\ 0 & \text{else} \end{cases}}$. What are the values of $latex {n > 0, n \equiv l \mod k}$? They start $latex {l, k + l, 2k+l, \ldots}$. If we were to factor out a $latex {k}$, we would get $latex {l/k, 1 + l/k, 2 + l/k, \ldots}$. So we continue to get $latex \displaystyle = \sum_{\substack{ n > 0 \\ n \equiv l \mod k}} \varphi(k) n^{-s} = \varphi(k) \sum_n \frac{1}{k^s} \frac{1}{(n + l/k)^s} = \frac{\varphi(k)}{k^s} \zeta(s, l/k) \ \ \ \ \ (3)$ Rearranging the sides, we get that $latex \displaystyle \zeta(s, l/k) = \frac{k^s}{\varphi(k)} \sum_{\chi \mod k} \bar{\chi}(l) L(s, \chi)$ To write $latex {L(s,\chi)}$ as a sum of Hurwitz zeta functions, we multiply by $latex {\chi(l)}$ and sum across $latex {l}$. Since $latex {\chi(l) \bar{\chi}(l) = 1}$ for $latex {l}$ coprime to $latex {k}$, the sum over characters on the right collapses, yielding a factor of $latex {\varphi(k)}$ since there are $latex {\varphi(k)}$ characters $latex {\mod k}$. $latex \Box$ I'd like to end by noting that the exact same idea can be used to first show that an L-function is a sum of Hurwitz zeta functions and to then conclude the converse using the heart of the idea of equation 3. Further, this document was typed up using latex2wp, which I cannot recommend highly enough.
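As a quick numerical sanity check of Lemma 1 (our own addition, not part of the original note), one can compare both sides of equation (1) for, say, $k = 4$ and $l = 1$, where the two characters mod 4 are the principal character and the character taking the values $\pm 1$ on $n \equiv 1, 3 \mod 4$. A small mpmath sketch:

import mpmath as mp

s = mp.mpf("2.5")                 # any s with Re(s) > 1 keeps the series convergent
# One period of values chi(0), chi(1), chi(2), chi(3) for the two characters mod 4
chi0 = [0, 1, 0, 1]               # principal character
chi1 = [0, 1, 0, -1]              # the non-principal character
lhs = mp.zeta(s, mp.mpf(1) / 4)   # Hurwitz zeta(s, l/k) with l = 1, k = 4
# Right side of (1): (k^s / phi(k)) * sum_chi chibar(1) L(s, chi); phi(4) = 2, chibar(1) = 1
rhs = mp.mpf(4) ** s / 2 * (mp.dirichlet(s, chi0) + mp.dirichlet(s, chi1))
print(lhs, rhs)                   # the two values agree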
I was considering the set of linear operators: $$O_{a,k}(f) = \frac{f(ax^k) - f(x)}{ax^k - x} $$ Particularly I am looking for the closed forms of the eigenfunctions of this operator, that is, the functions f (dependent on the parameters a, k, x) such that $$ O_{a,k}(f) = f$$ Work So Far: It's trivial to show that $O_{a,k}(c) = 0$ for any constant c, and similarly that $O_{a,k}(x) = 1$. This naturally motivates the intuition that $$ f = 1 + x + q_1(x) + q_2(x) + ... $$ Such that $$ O_{a,k}[q_i(x)] = q_{i-1}(x) \ \text{and} \ O_{a,k}[q_1(x)] = x $$ Furthermore, from the linearity of O it follows that $O_{a,k}(tf) = tO_{a,k}(f)$ for constants t. We thus consider: $$ O_{a,k}^{-1}(x^\mu) $$ Which by definition is the solution to $$O_{a,k}(g) = \frac{g(ax^k) - g(x)}{ax^k - x} = x^{\mu} $$ From here it follows that $$g(ax^k) - g(x) = ax^{\mu + k} - x^{\mu+1}$$ Which trivially leads to $$ g = \left( a\left( \frac{x}{a}\right)^{\frac{\mu + k}{k}} - \left( \frac{x}{a}\right)^{\frac{\mu + 1}{k}} \right) + \left( a\left( \frac{ \left(\frac{x}{a} \right)^{\frac{1}{k}}}{a}\right)^{\frac{\mu + k}{k}} - \left( \frac{\left(\frac{x}{a}\right)^{\frac{1}{k}}}{a}\right)^{\frac{\mu + 1}{k}} \right) ... $$ That reduces to $$ g = \sum_{i=1}^{\infty} \left[ \frac{1}{a}\left( \frac{x^{\frac{\mu}{k^{i+1}}}}{a^{\frac{\left( \frac{1}{k} - 1 \right)^{i+1}}{\frac{1}{k}-1}}} - \frac{x^{\frac{\mu}{k^{i}}}}{a^{\frac{\left( \frac{1}{k} - 1 \right)^{i}}{\frac{1}{k}-1}}} \right)x^{\frac{1}{k^i}} \right]$$ This has an extremely elegant series form, which is due to the "power series in the power" structure of the term coefficients. Interestingly, every item in this series is itself a constant times a power of x, and therefore the same formula can be recursively re-applied to each individual term, infinitely many times, for the case $\mu = 1$ to recover the definition of $f$ that we were originally considering. Other Observations: The case k = 1 recovers the Jackson q-derivative and can thus be handled by the theory of q-series. If k = 1 and a → 1 we recover the traditional derivative from Calculus I. The eigenfunction f is respectively the q-exponential and the traditional exponential in each case.
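A tiny symbolic check of the two "trivial" facts above, using sympy (our own sketch, not part of the original question):

import sympy as sp

x, a, k, c, t = sp.symbols('x a k c t')

def O(f):
    # The operator O_{a,k} from the question, acting on an expression in x
    return sp.simplify((f.subs(x, a*x**k) - f) / (a*x**k - x))

print(O(c))      # 0: constants are annihilated
print(O(x))      # 1
print(O(t*x))    # t: linearity in a constant factor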
What is the greatest number of points of intersection that can occur when 2 different circles and 2 different straight lines are drawn on the same piece of paper? \(\text{Let $n$ be the number of circles }\\ \text{Let $m$ be the number of straight lines }\) Each pair of lines meets in at most 1 point, each line meets each circle in at most 2 points, and each pair of circles meets in at most 2 points, so the maximum is \(\frac{1}{2}m(m-1)+2mn+n(n-1)=\frac{1}{2}m(m-1)+n(2m+n-1)\). \(\begin{array}{|rcll|} \hline n &=& 2\\ m &=& 2 \\\\ && \dfrac{1}{2}\cdot m(m-1)+n(2m+n-1)\\ &=& \dfrac{1}{2}\cdot 2(2-1)+2(2\cdot 2+2-1) \\ &=& 1 +2\cdot 5 \\ &\mathbf{=}& \mathbf{11} \\ \hline \end{array} \) The greatest number of points of intersection is 11
4.2: Maxima and Minima Exercises 90) In precalculus, you learned a formula for the position of the maximum or minimum of a quadratic equation \(y=ax^2+bx+c\), which was \(m=−\frac{b}{(2a)}\). Prove this formula using calculus. 91) If you are finding an absolute minimum over an interval \([a,b],\) why do you need to check the endpoints? Draw a graph that supports your hypothesis. Solution: Answers may vary 92) If you are examining a function over an interval \((a,b),\) for \(a\) and \(b\) finite, is it possible not to have an absolute maximum or absolute minimum? 93) When you are checking for critical points, explain why you also need to determine points where \(f(x)\) is undefined. Draw a graph to support your explanation. Solution: Answers will vary 94) Can you have a finite absolute maximum for \(y=ax^2+bx+c\) over \((−∞,∞)\)? Explain why or why not using graphical arguments. 95) Can you have a finite absolute maximum for \(y=ax^3+bx^2+cx+d\) over \((−∞,∞)\) assuming a is non-zero? Explain why or why not using graphical arguments. Answer: No; answers will vary 96) Let \(m\) be the number of local minima and \(M\) be the number of local maxima. Can you create a function where \(M>m+2\)? Draw a graph to support your explanation. 97) Is it possible to have more than one absolute maximum? Use a graphical argument to prove your hypothesis. Answer: Since the absolute maximum is the function (output) value rather than the x value, the answer is no; answers will vary 98) Is it possible to have no absolute minimum or maximum for a function? If so, construct such a function. If not, explain why this is not possible. 99) [T] Graph the function \(y=e^{ax}.\) For which values of \(a\), on any infinite domain, will you have an absolute minimum and absolute maximum? Answer: When \(a=0\) For the following exercises, determine where the local and absolute maxima and minima occur on the graph given. Assume domains are closed intervals unless otherwise specified. 100) 101) Answer: Absolute minimum at 3; Absolute maximum at −2.2; local minima at −2, 1; local maxima at −1, 2 102) 103) Answer: Absolute minima at −2, 2; absolute maxima at −2.5, 2.5; local minimum at 0; local maxima at −1, 1 For the following problems, draw graphs of \(f(x),\) which is continuous, over the interval \([−4,4]\) with the following properties: 104) Absolute maximum at \(x=2\) and absolute minima at \(x=±3\) 105) Absolute minimum at \(x=1\) and absolute maximum at \(x=2\) Solution: Answers may vary. 106) Absolute maximum at \(x=4,\) absolute minimum at \(x=−1,\) local maximum at \(x=−2,\) and a critical point that is not a maximum or minimum at \(x=2\) 107) Absolute maxima at \(x=2\) and \(x=−3\), local minimum at \(x=1\), and absolute minimum at \(x=4\) Solution: Answers may vary. For the following exercises, find the critical points in the domains of the following functions. 108) \(y=4x^3−3x\) 109) \(y=4\sqrt{x}−x^2\) Answer: \(x=1\) 110) \(y=\frac{1}{x−1}\) 111) \(y=ln(x−2)\) Answer: None 112) \(y=tan(x)\) 113) \(y=\sqrt{4−x^2}\) Answer: \(x=0\) 114) \(y=x^{3/2}−3x^{5/2}\) 115) \(y=\frac{x^2−1}{x^2+2x−3}\) Answer: None 116) \(y=sin^2(x)\) 117) \(y=x+\frac{1}{x}\) Answer: \(x=−1,1\) For the following exercises, find the absolute maxima and minima for the functions over the specified domain. 
118) \(f(x)=x^2+3\) over \([−1,4]\) 119) \(y=x^2+\frac{2}{x}\) over \([1,4]\) Answer: Absolute maximum is \(\frac{33}{2}\) at \(x=4\); absolute minimum is \(3\) at \(x=1\) 120) \(y=(x−x^2)^2\) over \([−1,1]\) 121) \(y=\frac{1}{x−x^2}\) over \((0,1)\) Answer: Absolute minimum: \((\frac{1}{2}, 4)\) 122) \(y=\sqrt{9−x}\) over \([1,9]\) 123) \(y=x+sin(x)\) over \([0,2π]\) Answer: Absolute maximum: \((2π, 2π);\) absolute minimum: \((0, 0)\) 124) \(y=\frac{x}{1+x}\) over \([0,100]\) 125) \(y=|x+1|+|x−1|\) over \([−3,2]\) Answer: Absolute maximum: \(x=−3;\) absolute minimum: \(−1≤x≤1, y=2\) 126) \(y=\sqrt{x}−\sqrt{x^3}\) over \([0,4]\) 127) \(y=sinx+cosx\) over \([0,2π]\) Answer: Absolute maximum is \(\sqrt{2}\) at \(x=\frac{π}{4}\); absolute minimum is \(−\sqrt{2}\) at \(x=\frac{5π}{4}\) 128) \(y=4sinθ−3cosθ\) over \([0,2π]\) For the following exercises, find the local and absolute minima and maxima (as ordered pairs) for the functions over \((−∞,∞).\) 129) \(y=x^2+4x+5\) Answer: Absolute minimum: \(x=−2, y=1\) 130) \(y=x^3−12x\) 131) \(y=3x^4+8x^3−18x^2\) Answer: Absolute minimum: \((-3, −135)\); local maximum: \((0, 0)\); local minimum: \((1,−7)\) 132) \(y=x^3(1−x)^6\) 133) \(y=\frac{x^2+x+6}{x−1}\) Answer: Local maximum: \((1−2\sqrt{2}, 3−4\sqrt{2})\); local minimum: \((1+2\sqrt{2}, 3+4\sqrt{2})\) 134) \(y=\frac{x^2−1}{x−1}\) For the following functions, use a calculator to graph the function and to estimate the absolute and local maxima and minima. Then, solve for them explicitly. 135) [T] \(y=3x\sqrt{1−x^2}\) Answer: Absolute maximum: \(x=\frac{\sqrt{2}}{2}, y=\frac{3}{2};\) absolute minimum: \(x=−\frac{\sqrt{2}}{2}, y=−\frac{3}{2}\) 136) [T] \(y=x+sin(x)\) 137) [T] \(y=12x^5+45x^4+20x^3−90x^2−120x+3\) Answer: Local maximum: \(x=−2,y=59\); local minimum: \(x=1, y=−130\) 138) [T] \(y=\frac{x^3+6x^2−x−30}{x−2}\) 139) [T] \(y=\frac{\sqrt{4−x^2}}{\sqrt{4+x^2}}\) Answer: Absolute maximum: \(x=0, y=1;\) absolute minimum: \(x=−2,2, y=0\) 140) A company that produces cell phones has a cost function of \(C=x^2−1200x+36,400,\) where \(C\) is cost in dollars and \(x\) is number of cell phones produced (in thousands). How many units of cell phone (in thousands) minimizes this cost function? 141) A ball is thrown into the air and its position is given by \(h(t)=−4.9t^2+60t+5m.\) Find the height at which the ball stops ascending. How long after it is thrown does this happen? Answer: \(h=\frac{9245}{49}m, t=\frac{300}{49}s\) For the following exercises, consider the production of gold during the California gold rush (1848–1888). The production of gold can be modeled by \(G(t)=\frac{(25t)}{(t^2+16)}\), where t is the number of years since the rush began \((0≤t≤40)\) and \(G\) is ounces of gold produced (in millions). A summary of the data is shown in the following figure. 142) Find when the maximum (local and global) gold production occurred, and the amount of gold produced during that maximum. 143) Find when the minimum (local and global) gold production occurred. What was the amount of gold produced during this minimum? Answer: The global minimum was in 1848, when no gold was produced. Find the critical points, maxima, and minima for the following piecewise functions. 144) \(y=\begin{cases}x^2−4x& 0≤x≤1\\x^2−4&1<x≤2\end{cases}\) 145) \(y=\begin{cases}x^2+1 & x≤1 \\ x^2−4x+5 & x>1\end{cases}\) Answer: Absolute minima: \(x=0, x=2, y=1\); local maximum at \(x=1, y=2\) For the following exercises, find the critical points of the following generic functions. Are they maxima, minima, or neither?
State the necessary conditions. 146) \(y=ax^2+bx+c,\) given that \(a>0\) 147) \(y=(x−1)^a\), given that \(a>1\) Answer: No maxima/minima if \(a\) is odd, minimum at \(x=1\) if \(a\) is even
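For readers who want to verify answers such as the one given for problem 131 above, a short sympy sketch can locate the critical points and evaluate the function there (this snippet is our own addition, not part of the exercise set):

import sympy as sp

x = sp.symbols('x')
y = 3*x**4 + 8*x**3 - 18*x**2                    # problem 131
critical_points = sp.solve(sp.diff(y, x), x)     # solve y' = 0
print(critical_points)                           # [-3, 0, 1]
print([(c, y.subs(x, c)) for c in critical_points])   # (-3, -135), (0, 0), (1, -7)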
Recently there has been a series of electronic games, mainly on mobile platforms, built around collecting various cards for the game system. There can be a lot of variation in the game, like tower defence, card battling, action shooting etc., but these aren't really important. They can all be characterized as card collection games (CCG). What elements must a card collection game contain? A free way to obtain cards through events, and a paying way to obtain cards, which we call an invocation. A basic model would contain a card pool where, each time, a card is chosen with a certain probability and given to the player. In order to attract the players to spend further, we can usually find some alternative option to invoke cards that is apparently a better deal. A typical way is to guarantee a rarer card when the invocation is done in a larger batch. Without further assumptions on the rate of appearance of the rare cards, we ought to check which option is actually better. In this entry we will simplify our model to two types of cards: common and rare cards. A binary model allows a binomial distribution and makes everything easier. Since the number of cards is finite, and of course countable, the following calculations will be done with respect to discrete distributions. Order statistics Suppose we have a distribution $X$. Recall that its CDF $F_X$ is given by $F_X(x) = P(X\leq x) = \sum _{y\leq x} P(X=y)$. Furthermore let $X_1,X_2,...$ be an independent process $\sim X$. What is the distribution of $Y = \min(X_1,...,X_n)$? This is easy: $F_Y(x) = P(Y\leq x) = 1 - P(Y > x) = 1 - P(X_1 > x\cap X_2 > x \cap ... \cap X_n > x) = 1 - (1-F_X(x))^n$. Similarly we want to find $Z = \max(X_1,...,X_n)$: $F_Z(x) = P(Z\leq x) = P(X_1\leq x \cap X_2\leq x \cap ... \cap X_n\leq x) = \prod P(X_i\leq x) = F_X(x)^n$. We can now deal with the first typical system: the re-invocation. Re-invocation The idea is that you first draw from the card pool. If the result is not satisfying you may re-invoke. Without loss of generality (oh, of course) we assume the re-invocation has the same rate as the first invocation. First we handle the single case. Let $X$ be a binomial variable where $P(X=1) = p$ (rare cards) and $P(X=0) = 1-p$ (common cards). Let $X'$ be the consequent re-invocation $\sim X$. Since we only have two (types of) cards, we re-invoke only when we get a common card. Then the rate of getting rare cards under a re-invocation system is $P(\text{rare}) = P(X=1)+P(X'=1\mid X=0)P(X=0) = p+p(1-p) = p(2-p) = q$. As we are comparing with a batch of 10 cards, the expectance over 10 such single invocations is given by $E(X_1+...+X_{10}) = \sum _{k=1}^{10}kC^{10}_kq^k(1-q)^{10-k}$ But since this is an independent process, it can be simply written as $E(\sum X_i) = \sum E(X_i) = 10E(X) = 10p(2-p)$. It is a very smooth quadratic curve and there is nothing too special about it. But what if we consider some 'special offer' that comes in a batch? Let's consider the following offer: you pay 10 times as much as a single invocation and get 10 cards, one of them a guaranteed rare card and the rest rare at a rate of $p$. The re-invocation is done over the whole batch. Let's call it a mega. It is hard to calculate the expectance because it is hard to say when we need to re-invoke. A simple rule is based on the median: if the result is below the median and we re-invoke, we have a better than 50% chance of getting a better result. However, the median is not easy to calculate for an asymmetric binomial distribution, so we use the mean instead.
If we get fewer than $[9p+1]$ rare cards we will re-invoke. First we find the expectance over a mega without re-invocation: $E = \sum _{k=0}^9 (k+1)C^9_kp^k(1-p)^{9-k}$ Then using re-invocation: $E' = E\sum _{k=0}^{[9p]}C^9_kp^k(1-p)^{9-k}+\sum _{k=[9p+1]}^{9}(k+1)C^9_kp^k(1-p)^{9-k}$ Of course we sum over well-defined indices only. In some games, such an offer gives 11 cards (like Love Live!) instead of 10. A straightforward reason is that single re-invocation works better! One can follow the code at the bottom to plot the following: Considering the guaranteed rarity is not that high, $p$ is actually not very low, say $p=0.25$. At this rate, making 10 single invocations is considerably better than doing a mega! Of course we assumed that the rate $p$ for single and mega are equal here. It is always possible for the developers to adjust the rate according to such a calculation. Super-rare cards Another problem is that we may want a super-rare card instead of a rare card. A non-common card may be rare or super-rare, and both can possibly be found in the guaranteed slot. Let $X$ be a discrete distribution that can be 0 (common), 1 (rare) or 2 (super-rare) for a common slot, and $Y$ can be 1 or 2 for the guaranteed slot. We further assume that $P(Y=2) = P(X=2\mid X\geq 1)$. To set up the parameters we let $P(X=1) = p$ and $P(Y=2) = r$. For $P(X=2)=q$ we have $q = \frac{pr}{1-r}$. Now assume the re-invocation criterion depends solely on the super-rare cards (if there is a rare card that we want besides the super-rare slot, just count that as super-rare too; this is possible in our model). This is the same for single invocation. Let $s = q(2-q)$; the expectance is: $E = \sum _{k=1}^{10}kC^{10}_ks^k(1-s)^{10-k}$ For mega, let the mean be $m = [r+9q]$; then $E = r + \sum _{k=1}^9 kC^9_kq^k(1-q)^{9-k}$ $E' = r(E\sum _{k=0}^{m-1}C^9_kq^k(1-q)^{9-k}+\sum _{k=m}^9(k+1)C^9_kq^k(1-q)^{9-k})+(1-r)(E\sum_{k=0}^m C^9_kq^k(1-q)^{9-k}+\sum _{k=m+1}^9 kC^9_kq^k(1-q)^{9-k})$ Now let's assume $p+q = 0.3$ and vary $q$ around $0.01$. As you would expect, it is just like a portion of the first graph, which is locally linear. Furthermore, mega is always better than making successive single invocations. Again as you would expect, the two lines will intersect as $q$ increases further. With $p+q = 0.3$ they intersect at $q \approx 0.0414$ (you can plot this yourself). That's how mega invocation works: you spend more at one time to get a better rate for the top cards. We don't call a rarity of 4% super-rare, do we? * As a last comment, the reader should notice that if we didn't assume $\frac{q}{p} = \frac{r}{1-r}$ and let the parameters flow freely, everything could be set up arbitrarily and the calculation would not carry much significance. However, it is a pretty sensible assumption that allows us to make models and interpolate, if possible, against the real data. Attached is a section of simple code that directly calculates the expectation using the above formula without any simplification.
from math import comb as C   # C(n, k): binomial coefficient

def single(x, n):
    # Expected number of rare cards from n independent single invocations,
    # each with one re-invocation, when the single-draw rare rate is x.
    y = x * (2 - x)          # effective rare rate with one re-invocation
    s = 0
    for k in range(n + 1):
        s += k * (y ** k) * (1 - y) ** (n - k) * C(n, k)
    return s

def mega(x, n):
    # Expected number of rare cards from a mega (batch of n with one guaranteed
    # rare), re-invoking the whole batch when the count falls below the rounded mean.
    if x == 0:
        return 1
    elif x == 1:
        return n
    p = int((n - 1) * x + 1)            # re-invoke when fewer than p rares
    s0 = 0                               # expectation without re-invocation
    for k in range(n):
        s0 += (k + 1) * x ** k * (1 - x) ** (n - 1 - k) * C(n - 1, k)
    s1 = 0
    for k in range(p):                   # outcomes that trigger a re-invocation
        s1 += x ** k * (1 - x) ** (n - 1 - k) * C(n - 1, k)
    s1 *= s0
    for k in range(p, n):                # outcomes that are kept
        s1 += (k + 1) * x ** k * (1 - x) ** (n - 1 - k) * C(n - 1, k)
    return s1

def megaSR(p, r, n):
    # Expected number of super-rare cards from a mega, where p is the rare rate,
    # r the super-rare rate of the guaranteed slot and q = p*r/(1-r).
    q = p * r / (1 - r)
    m = int(r + 9 * q)                   # threshold [r + 9q] from the text (n = 10 there)
    E0 = r                               # expectation without re-invocation
    for k in range(n):
        E0 += k * C(n - 1, k) * q ** k * (1 - q) ** (n - 1 - k)
    t = 0
    for k in range(m):
        t += C(n - 1, k) * q ** k * (1 - q) ** (n - 1 - k)
    t *= E0
    for k in range(m, n):
        t += (k + 1) * C(n - 1, k) * q ** k * (1 - q) ** (n - 1 - k)
    E1 = r * t                           # case: guaranteed slot was super-rare
    t = 0
    for k in range(m + 1):
        t += C(n - 1, k) * q ** k * (1 - q) ** (n - 1 - k)
    t *= E0
    for k in range(m + 1, n):
        t += k * C(n - 1, k) * q ** k * (1 - q) ** (n - 1 - k)
    return E1 + (1 - r) * t              # plus the case of a merely rare guarantee
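For instance, to reproduce the comparisons discussed above (our own usage sketch; the parameter values are the ones assumed in the text):

# Ten single invocations versus one mega for rare cards at p = 0.25
print(single(0.25, 10), mega(0.25, 10))

# Super-rare comparison with p + q = 0.3 and q = 0.01
p, q = 0.29, 0.01
r = q / (p + q)                 # chosen so that q = p*r/(1-r)
print(single(q, 10), megaSR(p, r, 10))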
To understand the complexity of the randomized algorithm, you must delve into the implementation details. Suppose $S_1,S_2$ consist of $m$ elements, each bounded by $n$ in absolute value, i.e. the input size is $O(m\log n)$. Let $P_S=\prod\limits_{w\in S}\left(x-w\right)$ be the polynomial corresponding to the set $S$. $P_{S_1},P_{S_2}$ are degree $m$ polynomials. Thus, in order to succeed with constant probability in testing whether $q(x)=P_{S_1}(x)-P_{S_2}(x)$ is the zero polynomial, you need to evaluate it at numbers in the range $[0,cm]$, for some constant $c$. Suppose you choose $y$ uniformly at random from this range; then in order to avoid blowup in the intermediate values while computing $q(y)$, you need to compute the values modulo some $k$ (the standard trick in PIT). Note that the blowup here is not exponential, unlike in PIT where the polynomials are given as algebraic circuits, so you can in fact directly compute $q(y)$. However, this will lead to handling $\Omega(m)$-bit numbers, which will result in quadratic running time, so you can still compute $q(y) \bmod k$ for a random $k$ chosen appropriately to save some time. If $k$ is chosen at random from some range $[1,R]$, then a "good" event is that $k$ does not divide $q(y)$; in that case $q(y)\neq 0$ iff $q(y) \bmod k \neq 0$. To lower bound this probability, note that $k$ is prime with probability $\approx \frac{1}{\log R}$ (here you need some finite version of the prime number theorem), and that $q(y)\le (cm+n)^m$, hence it has at most $m \log (cm+n)$ different prime factors. $\begin{align*}&\Pr\left[\text{$k$ is prime $\land$ $k$ does not divide $q(y)$}\right]=\Pr\left[\text{$k$ is prime}\right]\cdot\\&\Pr\big[\text{$k$ does not divide $q(y)$} \,\big|\, \text{$k$ is prime}\big]\approx \frac{1}{\log R}\left(1-\frac{\log (cm+n)^m}{R/\log R}\right)\end{align*}$ Choosing $R=(m+n)^2$ suffices to make the above probability high enough, and yields constant success probability after $\approx\log R$ evaluations. Each evaluation requires $O(m)$ additions/multiplications of numbers with $\log R$ bits, thus the overall running time is $O(m\log ^2 R\log\log R)$, which is worse than sorting. Perhaps sharper bounds on the success probability will allow you to get rid of one of the logarithmic factors, but taking numeric operations into account will eventually show that you can't beat naive sorting. However, the randomized approach is not without merits, as it allows you to determine multiset equality between two parties with logarithmic communication (you only need to send $y,k$ and the evaluations of $P_{S_1},P_{S_2}$ on $y$ modulo $k$).
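To make the scheme above concrete, here is a small Python sketch of the randomized multiset-equality test (our own illustration; the choice of constant c = 2 and the rejection-sampling of a prime k are assumptions, not part of the original argument):

import random

def fingerprint(S, y, k):
    # Evaluate P_S(y) = prod_{w in S} (y - w) modulo k
    v = 1
    for w in S:
        v = (v * (y - w)) % k
    return v

def is_prime(k):
    if k < 2:
        return False
    d = 2
    while d * d <= k:
        if k % d == 0:
            return False
        d += 1
    return True

def probably_equal(S1, S2, trials=20):
    # One-sided error: "False" is always correct, "True" may be wrong
    # with probability that shrinks with the number of trials.
    m = len(S1)
    n = max(map(abs, S1 + S2), default=1)
    R = (m + n) ** 2
    for _ in range(trials):
        y = random.randint(0, 2 * m)          # random evaluation point in [0, cm], c = 2
        k = random.randint(2, R)
        while not is_prime(k):                # rejection-sample a random prime <= R
            k = random.randint(2, R)
        if fingerprint(S1, y, k) != fingerprint(S2, y, k):
            return False
    return True

print(probably_equal([1, 2, 2, 3], [2, 1, 3, 2]))   # True: same multiset
print(probably_equal([1, 2, 2, 3], [1, 2, 3, 3]))   # almost surely False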
Conveners, Quark & Lepton Flavor: J Michael Williams (Massachusetts Inst. of Technology (US)), Wolfgang Altmannshofer (UC Santa Cruz), Bertrand Echenard (California Institute of Technology (US)), Brian Beckford (University of Michigan) Description parallel sessions I will discuss models with hidden sectors that can be probed with flavor. Searches for the proposed dark sector analogs of the photon and the Higgs boson at the LHCb experiment will be presented. LHCb has world-leading sensitivity to both hypothetical particles in some mass regions. Planned future upgrades and the resulting physics prospects will also be discussed. The KOTO experiment was designed to observe and study the K$^{0}_{L} \rightarrow \pi^{0}\nu\bar{\nu}$ decay at J-PARC. The Standard Model (SM) prediction for the process is (3.0$\pm$0.3) x 10$^{-11}$ with small uncertainties [1]. This unique \emph{golden} decay is an ideal candidate to probe for new physics and can place strict constraints on beyond the standard model (BSM) theories. The... The KOTO experiment at the J-PARC research facility in Tokai, Japan aims to observe and measure the rare decay of the neutral kaon, $K_L^0 \rightarrow \pi^0 \nu \bar{\nu}$. This decay has a Standard Model (SM) predicted branching ratio (BR) of $(3.00 \pm 0.30) \times 10^{-11}$ [1]. While this decay is extremely rare, it is one of the best decays in the quark sector to probe for new physics... In this talk we show how $\tau$ decays offer an interesting possibility to discriminate between different operators contributing to lepton flavor violation discussed within an effective field theory framework. Recent developments in the determination of the hadronic matrix elements needed to consider semileptonic decays are reviewed. We also discuss the complementarity with other probes such... The tiniest upper limit on any particle's branching ratio was established in 2016 by the MEG experiment on the lepton-flavor-violating muon decay, $\mu \to e \gamma$. To further explore the existence of the decay with an order of magnitude higher sensitivity, the detectors have been upgraded. The new experiment, MEG II, is going to start data-taking in 2020 at the Paul Scherrer Institute in... The primary physics goal of the Mu2e Experiment is to search for Charged Lepton Flavor Violation (CLFV) in the process of a coherent neutrinoless $\mu^{-} N \rightarrow e^{-} N$ transition.
This process is allowed under the Standard Model, but only at unobservably small rates. Observation of this process would therefore be an unambiguous indication of new physics. The Mu2e goal is to improve on the... We will investigate an alternative Mu2e-II production scheme based on general knowledge of muon-collider and neutrino-factory front ends, and specific knowledge developed on previous Muons, Inc. SBIR/STTR projects. Bright muon beams generated from sources designed for muon collider and neutrino factory facilities have been shown to generate two orders of magnitude more muons per proton than... The Fermilab Muon g-2 experiment will measure the anomalous magnetic moment of the muon to a precision of 140 parts per billion, which is a factor of four improvement over the previous E821 measurement at Brookhaven. The experiment will also extend the search for the muon's electric dipole moment (EDM) by approximately two orders of magnitude, with a sensitivity down to $10^{-21}$ e·cm. Both of... The Muon $g-2$ experiment at Fermilab is set to provide the most precise measurement of the anomalous magnetic moment of the muon. There is currently a 3+ $\sigma$ tension between the experimental value and Standard Model theory, making this a promising way to look for evidence of beyond the standard model physics. The hadronic vacuum polarization (HVP) contribution to muon $g-2$ is the... We discuss the implications of the recent discovery of CP violation in charm decays at LHCb. Furthermore, we show in which modes to search for charm CP violation next and present U-spin sum rules for CP asymmetries of charmed baryon decays. BESIII has collected data samples corresponding to luminosities of 2.93 fb-1, 3.19 fb-1 and 0.567 fb-1 at center-of-mass energies of 3.773, 4.178, and 4.6 GeV, respectively. The data set collected at 3.773 GeV contains quantum-correlated D0D0bar pairs that provide access to strong-phase differences between amplitudes. We report the measurements of strong phase differences for D0(-bar) -> K_S/L... BESIII has collected data samples corresponding to luminosities of 2.93 fb-1 and 3.19 fb-1 at center-of-mass energies of 3.773 and 4.178 GeV, respectively. Based on these, we report measurements of the decays D(s)+ -> l+v (l=mu, tau), D0(+) -> K-bar(pi)l+v (l=e,mu), D0(+) -> K-bar(pi)pie+v, D0(+) -> a0(980)e+v, Ds+ -> eta(')e+v and Ds+ -> K(*)0e+v. From these analyses, precise determinations of... Copious numbers of charmed baryons are produced at the LHC. These are detected in various decay modes using the LHCb detector. We report on some unique decays that give insight into weak decay mechanisms. We also contrast with beauty baryon decays in some instances. Decays of $b$ hadrons provide a powerful probe for new physics effects that may violate the Standard Model's paradigm of Lepton Flavor Universality, whereby the three charged lepton flavors are distinguished only by their differing masses. A definitive observation of a deviation from the predictions of LFU would provide unambiguous evidence of new physics. A recent history of results showing... We present a state-of-the-art picture of the imprints of New Physics in $b \to s \ell^{+} \ell^{-}$ transitions in light of the most recent experimental updates on lepton-universality tests of the Standard Model in this channel from the LHCb and Belle collaborations. We make use of the language of effective field theories in order to characterize a model-independent study of New Physics...
The rare inclusive decay $\bar{B}\rightarrow X_s\gamma$ is an important probe of physics beyond the standard model. The largest uncertainty on the decay rate and CP asymmetry comes from resolved photon contributions. They first appear at order $1/m_b$ in the heavy quark expansion and arise from operators other than $Q_{7\gamma}$. One of the three leading contributions in the heavy quark... This talk will report recent results from ongoing lattice-QCD calculations of semileptonic decays from the Fermilab Lattice and MILC collaborations. The focus of the talk will be on heavy-to-light decays (D/B to K/$\pi$). These calculations play an essential role in determining CKM matrix elements (in particular, $|V_{us}|$, $|V_{cd}|$, and $|V_{ub}|$). They are also important for constraining... The beauty quark is one of the kinematically accessible heavy quarks at HERA. Measurements of open $b$-quark production in deep inelastic scattering (DIS) of ${e^\pm}p$ at HERA provide an important test of perturbative Quantum Chromodynamics (pQCD) within the Standard Model and are used to constrain proton parton distribution functions (PDFs). In this contribution we attempt to determine... Semileptonic $b$-hadron decays provide a laboratory to measure the CKM matrix element $|V_{cb}|$, as well as to test lepton flavor universality violation (LFUV) via the $R(D^{(*)})$ ratios. Measurements of the former exhibit low values that are in tension with inclusive $|V_{cb}|$ measurements, while persistent LFUV signals are observed above the 3 sigma level. This talk provides a survey of recent... B physics is in an exciting era. The LHCb and Belle II experiments are reaching unprecedented precision and are providing new opportunities for discovering physics beyond the Standard Model. To connect the hadronic processes observed in the experiments to the underlying short-distance physics, lattice QCD calculations are essential. I will discuss recent progress and future prospects for B... The Belle II experiment has begun its main physics running with a fully instrumented detector in Tsukuba, Japan. With the SuperKEKB asymmetric-energy e$^+$e$^-$ collider producing collisions with an ultimate design luminosity of 8 $\times 10^{35}\,$cm$^{-2}\,$s$^{-1}$ and a planned $50\,$ab$^{-1}$ data set, the Belle II/SuperKEKB facility is poised to become the world's first Super B Factory.... Since 2010, the LHCb experiment at CERN has been accumulating 1-2 $fb^{-1}$ of 7-13 TeV pp collision data every year. This b- and c-hadron rich data sample, together with the detector's excellent performance, has allowed LHCb to carry out world leading measurements in the field of flavor physics. Many of these results, however, will benefit from significantly larger data samples, and that is... A key part of the LHCb charged particle tracking system is the silicon detector (UT) placed after the VErtex LOcator and before the dipole magnet. Its main function is to make a quick measurement of the momentum of tracks using the small magnetic field between the VELO and the UT. The fully software trigger is consequently sped up by a factor of three. Also of prime importance is the rejection... The recent discovery of several narrow pentaquark states by the LHCb experiment will be presented.
Doubly heavy baryons $\left(QQq\right)$ and singly heavy antimesons $\left(\bar{Q}q\right)$ are related by the heavy quark-diquark (HQDQ) symmetry because in the $m_Q \to \infty$ limit, the light degrees of freedom in both hadrons are expected to be in identical configurations. Hyperfine splittings of the ground states in both systems are nonvanishing at $O(1/m_Q)$ in the heavy quark mass... The decays of the $\Lambda_b$ baryon constitute important sources of information for different aspects of weak interactions. $\Lambda_b$ baryons are produced prolifically at the LHC, with their production ratio with respect to light B mesons decreasing rapidly with transverse momentum. These $\Lambda_b$'s have been analyzed by the LHCb collaboration in order to measure form factors in $\Lambda_b \to \Lambda_c \mu^- \bar{\nu}$ transitions,... We revisit the non-relativistic effective field theory called XEFT that is specifically designed for the description of the X(3872), which is one of the most interesting candidates for a hadronic molecule. In the framework of XEFT, the X(3872) is described as a bound state of two D mesons. Two new interaction terms consistent with general power counting rules are introduced to study the interaction of... We selected candidate events for production of the exotic charged charmonium-like states $Z_c^{\pm}(3900)$ decaying to $J/\psi\pi^{\pm}$ and $X(3872)$ decaying to $J/\psi\pi^{\pm}\pi^{\mp}$. We use 10.4 $\rm fb^{-1}$ of $p\bar p$ collisions recorded by the D0 experiment at the Tevatron collider at $\sqrt s=$1.96 TeV. We measure the $Z_c$ mass and natural width using a subsample of... High-energy heavy ion collisions result in a deconfined phase transition where, instead of ordinary nuclear matter in the form of protons and neutrons, one can study the strongly coupled quark-gluon plasma (QGP). In peripheral heavy ion collisions, the presence of strong magnetic fields and the chiral anomaly is predicted to induce an electric current, which induces a charge separation along the...
On a well-posed turbulence model 1. University of Pittsburgh, Department of Mathematics, Pittsburgh, PA 15260, United States 2. Université Rennes 1, IRMAR, UMR CNRS 6625, F-35000 Rennes, France. The model is based on the approximation $\overline{u u} \approx \overline{\bar{u}\,\bar{u}}$, yielding the model $\nabla \cdot w= 0, \quad w_{t} + \nabla \cdot (\overline{w w}) - \nu \Delta w + \nabla q = \bar{f}$. In particular, we prove existence and uniqueness of strong solutions, develop the regularity of solutions of the model and give a rigorous bound on the modelling error, $||\bar{u} - w||$. Finally, we consider the question of non-physical vortices (false eddies), proving that the model correctly predicts that only a small amount of vorticity results when the total turning forces on the flow are small. Mathematics Subject Classification: Primary: 76F65; Secondary: 35Q30. Citation: W. Layton, R. Lewandowski. On a well-posed turbulence model. Discrete & Continuous Dynamical Systems - B, 2006, 6 (1): 111-128. doi: 10.3934/dcdsb.2006.6.111
Hi, We consider subspaces of $\mathbb{R}^N$. Suppose that we have a property called $\mbox{Prop}$ that applies to subspaces of $\mathbb{R}^N$, that is to say, a function from the set of subspaces of $\mathbb{R}^N$ into $\{0,1\}$. The variables $X_1,\dots,X_N$ are Gaussian, taking their values in $\mathbb{R}^N$. They are assumed i.i.d., drawn from an isotropic distribution (the covariance matrix is the identity). Prove that ($0\leqslant n\leqslant N$) $$ P(\mbox{Prop}(Span(X_1,\dots,X_n))=1)= P(\mbox{Prop}(Span(X_1,\dots,X_{N-n})^\bot)=1) $$ without referring to the definition of $\mbox{Prop}$ other than the obvious, namely that $$ \mbox{Prop}(Span(X_1,\dots,X_k)) $$ is a measurable function on $\Omega$, the sample space. In other words, the probability for some property to hold on a subspace spanned by $n$ variables chosen at random (Gaussian isotropic) is equal to the probability for it to hold on the orthogonal complement of the space spanned by $N-n$ variables. This may translate to this "funny" question: $X_1,\dots,X_6$ are six real-valued Gaussian i.i.d. variables. Prove that $$ P\left(\sin\left(\frac{|X_1|}{|X_1|+|X_2|+|X_3|}\right)>0.2\right)= P\left(\sin\left(\frac{|Y_1|}{|Y_1|+|Y_2|+|Y_3|}\right)>0.2\right) $$ where $Y_1=X_2X_6-X_3X_5$, $Y_2=-X_1X_6+X_3X_4$ and $Y_3=X_1X_5-X_2X_4$. (The vector $Y$ (in $\mathbb{R}^3$) is orthogonal to the subspace spanned by $(X_1,X_2,X_3)$ and $(X_4,X_5,X_6)$.) Thank you for your help, Saïd.
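A quick numpy Monte Carlo estimate of the two probabilities in the "funny" question (our own sanity check, not a proof):

import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200_000, 6))
a, b = X[:, :3], X[:, 3:]
lhs = np.sin(np.abs(a[:, 0]) / np.abs(a).sum(axis=1)) > 0.2
Y = np.cross(a, b)                      # Y1 = X2*X6 - X3*X5, etc.
rhs = np.sin(np.abs(Y[:, 0]) / np.abs(Y).sum(axis=1)) > 0.2
print(lhs.mean(), rhs.mean())           # the two empirical frequencies should be close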
Now showing items 1-5 of 5 Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV (Springer, 2015-05-20) The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at √s = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ... Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV (Springer, 2015-06) We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ... Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV (Springer, 2015-09) Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ... Coherent $\rho^0$ photoproduction in ultra-peripheral Pb-Pb collisions at $\mathbf{\sqrt{\textit{s}_{\rm NN}}} = 2.76$ TeV (Springer, 2015-09) We report the first measurement at the LHC of coherent photoproduction of $\rho^0$ mesons in ultra-peripheral Pb-Pb collisions. The invariant mass and transverse momentum distributions for $\rho^0$ production are studied ... Inclusive, prompt and non-prompt J/ψ production at mid-rapidity in Pb-Pb collisions at √sNN = 2.76 TeV (Springer, 2015-07-10) The transverse momentum (p_T) dependence of the nuclear modification factor R_AA and the centrality dependence of the average transverse momentum ⟨p_T⟩ for inclusive J/ψ have been measured with ALICE for Pb-Pb collisions ...
Now showing items 1-10 of 24 Production of Σ(1385)± and Ξ(1530)0 in proton–proton collisions at √s = 7 TeV (Springer, 2015-01-10) The production of the strange and double-strange baryon resonances (Σ(1385)±, Ξ(1530)0) has been measured at mid-rapidity (|y|< 0.5) in proton–proton collisions at √s = 7 TeV with the ALICE detector at the LHC. Transverse ... Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV (Springer, 2015-05-20) The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at √s = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ... Inclusive photon production at forward rapidities in proton-proton collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV (Springer Berlin Heidelberg, 2015-04-09) The multiplicity and pseudorapidity distributions of inclusive photons have been measured at forward rapidities ($2.3 < \eta < 3.9$) in proton-proton collisions at three center-of-mass energies, $\sqrt{s}=0.9$, 2.76 and 7 ... Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV (Springer, 2015-06) We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ... Measurement of pion, kaon and proton production in proton–proton collisions at √s = 7 TeV (Springer, 2015-05-27) The measurement of primary π±, K±, p and $\bar{p}$ production at mid-rapidity (|y|< 0.5) in proton–proton collisions at √s = 7 TeV performed with a large ion collider experiment at the large hadron collider (LHC) is reported. ... Two-pion femtoscopy in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV (American Physical Society, 2015-03) We report the results of the femtoscopic analysis of pairs of identical pions measured in p-Pb collisions at $\sqrt{s_{\mathrm{NN}}}=5.02$ TeV. Femtoscopic radii are determined as a function of event multiplicity and pair ... Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV (Springer, 2015-09) Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ... Charged jet cross sections and properties in proton-proton collisions at $\sqrt{s}=7$ TeV (American Physical Society, 2015-06) The differential charged jet cross sections, jet fragmentation distributions, and jet shapes are measured in minimum bias proton-proton collisions at centre-of-mass energy $\sqrt{s}=7$ TeV using the ALICE detector at the ... Centrality dependence of high-$p_{\rm T}$ D meson suppression in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Springer, 2015-11) The nuclear modification factor, $R_{\rm AA}$, of the prompt charmed mesons ${\rm D^0}$, ${\rm D^+}$ and ${\rm D^{*+}}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at a centre-of-mass ...
K*(892)$^0$ and $\Phi$(1020) production in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2015-02) The yields of the K*(892)$^0$ and $\Phi$(1020) resonances are measured in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV through their hadronic decays using the ALICE detector. The measurements are performed in multiple ...
Search Now showing items 1-10 of 32 The ALICE Transition Radiation Detector: Construction, operation, and performance (Elsevier, 2018-02) The Transition Radiation Detector (TRD) was designed and built to enhance the capabilities of the ALICE detector at the Large Hadron Collider (LHC). While aimed at providing electron identification and triggering, the TRD ... Constraining the magnitude of the Chiral Magnetic Effect with Event Shape Engineering in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Elsevier, 2018-02) In ultrarelativistic heavy-ion collisions, the event-by-event variation of the elliptic flow $v_2$ reflects fluctuations in the shape of the initial state of the system. This allows to select events with the same centrality ... First measurement of jet mass in Pb–Pb and p–Pb collisions at the LHC (Elsevier, 2018-01) This letter presents the first measurement of jet mass in Pb-Pb and p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV and 5.02 TeV, respectively. Both the jet energy and the jet mass are expected to be sensitive to jet ... First measurement of $\Xi_{\rm c}^0$ production in pp collisions at $\mathbf{\sqrt{s}}$ = 7 TeV (Elsevier, 2018-06) The production of the charm-strange baryon $\Xi_{\rm c}^0$ is measured for the first time at the LHC via its semileptonic decay into e$^+\Xi^-\nu_{\rm e}$ in pp collisions at $\sqrt{s}=7$ TeV with the ALICE detector. The ... D-meson azimuthal anisotropy in mid-central Pb-Pb collisions at $\mathbf{\sqrt{s_{\rm NN}}=5.02}$ TeV (American Physical Society, 2018-03) The azimuthal anisotropy coefficient $v_2$ of prompt D$^0$, D$^+$, D$^{*+}$ and D$_s^+$ mesons was measured in mid-central (30-50% centrality class) Pb-Pb collisions at a centre-of-mass energy per nucleon pair $\sqrt{s_{\rm ... Search for collectivity with azimuthal J/$\psi$-hadron correlations in high multiplicity p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 8.16 TeV (Elsevier, 2018-05) We present a measurement of azimuthal correlations between inclusive J/$\psi$ and charged hadrons in p-Pb collisions recorded with the ALICE detector at the CERN LHC. The J/$\psi$ are reconstructed at forward (p-going, ... Systematic studies of correlations between different order flow harmonics in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (American Physical Society, 2018-02) The correlations between event-by-event fluctuations of anisotropic flow harmonic amplitudes have been measured in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE detector at the LHC. The results are ... $\pi^0$ and $\eta$ meson production in proton-proton collisions at $\sqrt{s}=8$ TeV (Springer, 2018-03) An invariant differential cross section measurement of inclusive $\pi^{0}$ and $\eta$ meson production at mid-rapidity in pp collisions at $\sqrt{s}=8$ TeV was carried out by the ALICE experiment at the LHC. The spectra ... J/$\psi$ production as a function of charged-particle pseudorapidity density in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Elsevier, 2018-01) We report measurements of the inclusive J/$\psi$ yield and average transverse momentum as a function of charged-particle pseudorapidity density ${\rm d}N_{\rm ch}/{\rm d}\eta$ in p-Pb collisions at $\sqrt{s_{\rm NN}}= 5.02$ ... 
Energy dependence and fluctuations of anisotropic flow in Pb-Pb collisions at √sNN = 5.02 and 2.76 TeV (Springer Berlin Heidelberg, 2018-07-16) Measurements of anisotropic flow coefficients with two- and multi-particle cumulants for inclusive charged particles in Pb-Pb collisions at √sNN = 5.02 and 2.76 TeV are reported in the pseudorapidity range |η| < 0.8 ...
Consider the rational function \[f(x)=\dfrac{x^2−6x−7}{x−7} \nonumber \] The function can be factored as follows: \[f(x)=\dfrac{\cancel{(x−7)}(x+1)}{\cancel{x−7}} \nonumber \] which gives us \[f(x)=x+1,x≠7. \nonumber \] Does this mean the function \(f(x)\) is the same as the function \(g(x)=x+1?\) The answer is no. Function \(f(x)\) does not have \(x=7\) in its domain, but \(g(x)\) does. Graphically, we observe there is a hole in the graph of \(f(x)\) at \(x=7\), as shown in Figure, and no such hole in the graph of \(g(x)\), as shown in Figure. (Left) The graph of function \(f\) contains a break at \(x=7\) and is therefore not continuous at \(x=7\). (Right) The graph of function \(g\) is continuous. So, do these two different functions also have different limits as \(x\) approaches 7? Not necessarily. Remember, in determining a limit of a function as \(x\) approaches \(a\), what matters is whether the output approaches a real number as we get close to \(x=a\). The existence of a limit does not depend on what happens when \(x\) equals \(a\). Look again at Figure and Figure. Notice that in both graphs, as \(x\) approaches 7, the output values approach 8. This means \[ \lim \limits_{x \to 7} f(x)= \lim \limits_{x \to 7} g(x). \nonumber \] Remember that when determining a limit, the concern is what occurs near \(x=a\), not at \(x=a\). In this section, we will use a variety of methods, such as rewriting functions by factoring, to evaluate the limit. These methods will give us formal verification for what we formerly accomplished by intuition. Finding the Limit of a Sum, a Difference, and a Product Graphing a function or exploring a table of values to determine a limit can be cumbersome and time-consuming. When possible, it is more efficient to use the properties of limits, which is a collection of theorems for finding limits. Knowing the properties of limits allows us to compute limits directly. We can add, subtract, multiply, and divide the limits of functions as if we were performing the operations on the functions themselves to find the limit of the result. Similarly, we can find the limit of a function raised to a power by raising the limit to that power. We can also find the limit of the root of a function by taking the root of the limit. Using these operations on limits, we can find the limits of more complex functions by finding the limits of their simpler component functions.
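As a quick check of this opening example with a computer algebra system (our own addition, not part of the original text), sympy confirms that the limit of \(f\) at 7 equals the value of \(g\) there:

import sympy as sp

x = sp.symbols('x')
f = (x**2 - 6*x - 7) / (x - 7)
g = x + 1
# f is undefined at x = 7, but its limit there matches g(7)
print(sp.limit(f, x, 7), g.subs(x, 7))   # both print 8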
properties of limits Let \(a, k, A,\) and \(B\) represent real numbers, and \(f\) and \(g\) be functions, such that \(\lim \limits_{x \to a} f(x)=A\) and \( \lim \limits_{x \to a}g(x)=B.\) For limits that exist and are finite, the properties of limits are summarized in the following table.
Constant, k: \(\lim \limits_{x \to a} k=k \)
Constant times a function: \(\lim \limits_{x \to a} [k⋅f(x)]=k \lim \limits_{x \to a} f(x)=kA\)
Sum of functions: \(\lim \limits_{x \to a} [f(x)+g(x)]= \lim \limits_{x \to a}f(x)+ \lim \limits_{x \to a} g(x)=A+B\)
Difference of functions: \(\lim \limits_{x \to a} [f(x)−g(x)]= \lim \limits_{x \to a} f(x)− \lim \limits_{x \to a} g(x)=A−B\)
Product of functions: \( \lim \limits _{x \to a}[f(x)⋅g(x)]= \lim \limits _{x \to a}f(x)⋅ \lim \limits_{x \to a} g(x)=A⋅B\)
Quotient of functions: \(\lim \limits _{x \to a} \frac{f(x)}{g(x)}= \frac{\lim \limits _{x \to a}f(x) }{\lim \limits _{x \to a}g(x)}=\frac{A}{B},\; B≠0\)
Function raised to an exponent: \(\lim \limits _{x \to a}[f(x)]^n=[\lim \limits _{x \to a}f(x)]^n=A^n\), where \(n\) is a positive integer
nth root of a function, where n is a positive integer: \(\lim \limits _{x \to a} \sqrt[n]{f(x)} = \sqrt[n]{ \lim \limits _{x \to a}[ f(x) ]}=\sqrt[n]{A}\)
Polynomial function: \( \lim \limits _{x \to a} p(x)=p(a)\)
Example \(\PageIndex{1}\): Evaluating the Limit of a Function Algebraically Evaluate \[\lim \limits _{x \to 3}(2x+5). \nonumber \] Solution \[\begin{align} \lim \limits _{x \to 3}(2x+5) &= \lim \limits _{x \to 3} (2x)+\lim \limits _{x \to 3}(5) && \text{Sum of functions property} \\ &=2 \lim \limits_{ x \to 3}(x)+\lim \limits _{x \to 3}(5) && \text{Constant times a function property} \\ &=2(3)+5 && \text{Evaluate} \\ &=11 \end{align} \nonumber \] Exercise \(\PageIndex{1}\): Evaluate the following limit: \[\lim \limits_{x \to −12}(−2x+2). \nonumber \] Solution 26 Finding the Limit of a Polynomial Not all functions or their limits involve simple addition, subtraction, or multiplication. Some may include polynomials. Recall that a polynomial is an expression consisting of the sum of two or more terms, each of which consists of a constant and a variable raised to a nonnegative integral power. To find the limit of a polynomial function, we can find the limits of the individual terms of the function, and then add them together. Also, the limit of a polynomial function as \(x\) approaches \(a\) is equivalent to simply evaluating the function for \(a\). how to: Given a function containing a polynomial, find its limit Use the properties of limits to break up the polynomial into individual terms. Find the limits of the individual terms. Add the limits together. Alternatively, evaluate the function for \(a\). Example \(\PageIndex{1}\): Evaluating the Limit of a Function Algebraically Evaluate \[ \lim \limits_{x \to 3}(5x^2). \nonumber \] Solution \[\begin{align} \lim \limits_{x \to 3}(5x^2) &= 5 \lim \limits_{x \to 3}(x^2) && \text{Constant times a function property} \\ &=5(3^2) && \text{Function raised to an exponent property} \\&=45 \end{align} \nonumber \] Exercise \(\PageIndex{1}\): Evaluate \[ \lim \limits_{x \to 4} (x^3−5). \nonumber \] Solution 59 Example \(\PageIndex{2}\): Evaluating the Limit of a Polynomial Algebraically Evaluate \[ \lim \limits_{x \to 5} (2x^3−3x+1).
\nonumber \]
Solution
\[\begin{align} \lim \limits_{x \to 5}(2x^3−3x+1) &= \lim \limits_{x \to 5}(2x^3)−\lim \limits_{x \to 5}(3x)+\lim \limits_{x \to 5} (1) && \text{Sum of functions}\\ &= 2 \lim \limits_{x \to 5}(x^3)−3 \lim \limits_{x \to 5}(x)+\lim \limits_{x \to 5}(1) && \text{Constant times a function} \\ &=2(5^3)−3(5)+1 && \text{Function raised to an exponent} \\ &=236 &&\text{Evaluate} \end{align} \nonumber \]
Exercise \(\PageIndex{2}\): Evaluate the following limit: \[\lim \limits_{x \to −1}(x^4−4x^3+5). \nonumber \]
Solution
10
Finding the Limit of a Power or a Root
When a limit includes a power or a root, we need another property to help us evaluate it. The square of the limit of a function equals the limit of the square of the function; the same goes for higher powers. Likewise, the square root of the limit of a function equals the limit of the square root of the function; the same holds true for higher roots.
Example \(\PageIndex{3}\): Evaluating a Limit of a Power
Evaluate \[ \lim \limits_{x \to 2}(3x+1)^5. \nonumber \]
Solution
We will take the limit of the function as \(x\) approaches 2 and raise the result to the 5th power.
\[\begin{align} \lim \limits_{x \to 2} (3x+1)^5 &= (\lim \limits_{x \to 2}(3x+1))^5 \\ &=(3(2)+1)^5 \\ &=7^5 \\ &=16,807 \end{align} \nonumber \]
Exercise \(\PageIndex{3}\): Evaluate the following limit: \( \lim \limits_{x \to −4}(10x+36)^3.\)
Solution
−64
Q & A: If we can’t directly apply the properties of a limit, for example in \(\lim \limits_{x \to 2}(\frac{x^2−6x+8}{x−2})\), can we still determine the limit of the function as \(x\) approaches \(a\)?
Yes. Some functions may be algebraically rearranged so that one can evaluate the limit of a simplified equivalent form of the function.
Finding the Limit of a Quotient
Finding the limit of a function expressed as a quotient can be more complicated. We often need to rewrite the function algebraically before applying the properties of a limit. If the denominator evaluates to 0 when we apply the properties of a limit directly, we must rewrite the quotient in a different form. One approach is to write the quotient in factored form and simplify.
Example \(\PageIndex{4}\): Evaluating the Limit of a Quotient by Factoring
Evaluate \[\lim \limits_{x \to 2} (\frac{x^2−6x+8}{x−2}). \nonumber \]
Solution
Factor where possible, and simplify.
\[\begin{align} \lim \limits_{x \to 2} (\dfrac{x^2−6x+8}{x−2}) &= \lim \limits_{x \to 2}(\dfrac{(x−2)(x−4)}{x−2}) && \text{Factor the numerator.} \\ & = \lim \limits_{x \to 2}(\dfrac{\cancel{(x−2)}(x−4)}{\cancel{x−2}}) && \text{Cancel the common factors.} \\ &= \lim \limits_{x \to 2}(x−4) && \text{Evaluate.} \\ & =2−4=−2 \end{align} \nonumber \]
Analysis
When the limit of a rational function cannot be evaluated directly, factored forms of the numerator and denominator may simplify to a result that can be evaluated. Notice, the function \[f(x)=\dfrac{x^2−6x+8}{x−2} \nonumber \] is equivalent to the function \[f(x)=x−4,x≠2. \nonumber \] Notice that the limit exists even though the function is not defined at \(x = 2\).
Exercise \(\PageIndex{4}\)
Evaluate the following limit: \[\lim \limits_{x \to 7} \left( \dfrac{x^2−11x+28}{7−x} \right) . \nonumber \]
Solution
\(−3\)
Example \(\PageIndex{5}\): Evaluating the Limit of a Quotient by Finding the LCD
Evaluate \[\lim \limits_{x \to 5} \left( \dfrac{\frac{1}{x}−\frac{1}{5}}{x−5} \right) .
\nonumber \]
Solution
Find the LCD for the denominators of the two terms in the numerator, and convert both fractions to have the LCD as their denominator.
\[\begin{align} \lim \limits_{x \to 5} \left( \dfrac{\frac{1}{x}−\frac{1}{5}}{x−5} \right) &= \lim \limits_{x \to 5} \left( \dfrac{\frac{5−x}{5x}}{x−5} \right) && \text{Combine the fractions in the numerator.} \\ &= \lim \limits_{x \to 5} \left( \dfrac{−(x−5)}{5x(x−5)} \right) && \text{Write } 5−x \text{ as } −(x−5). \\ &= \lim \limits_{x \to 5} \left( \dfrac{−\cancel{(x−5)}}{5x\cancel{(x−5)}} \right) && \text{Cancel the common factors.} \\ &= \dfrac{−1}{5(5)} && \text{Evaluate.} \\ &= −\dfrac{1}{25} \end{align} \nonumber \]
Analysis
When determining the limit of a rational function that has terms added or subtracted in either the numerator or denominator, the first step is to find the common denominator of the added or subtracted terms; then, convert both terms to have that denominator, or simplify the rational function by multiplying numerator and denominator by the least common denominator. Then check to see if the resulting numerator and denominator have any common factors.
Exercise \(\PageIndex{5}\): Evaluate \[\lim \limits_{x \to −5} \left( \dfrac{\frac{1}{5}+\frac{1}{x}}{10+2x} \right). \nonumber \]
Solution
\(−\frac{1}{50}\)
how to: Given a limit of a function containing a root, use a conjugate to evaluate
If the quotient as given is not in indeterminate \((\frac{0}{0})\) form, evaluate directly.
Otherwise, rewrite the sum (or difference) of two quotients as a single quotient, using the least common denominator (LCD).
If the numerator includes a root, rationalize the numerator; multiply the numerator and denominator by the conjugate of the numerator. Recall that \(a+\sqrt{b}\) and \(a−\sqrt{b}\) are conjugates.
Simplify.
Evaluate the resulting limit.
Example \(\PageIndex{6}\): Evaluating a Limit Containing a Root Using a Conjugate
Evaluate \[ \lim \limits_{x \to 0} \left( \dfrac{\sqrt{25−x} −5}{x} \right) . \nonumber \]
Solution
\[\begin{align} \lim \limits_{x \to 0} \left( \dfrac{\sqrt{25−x}−5}{x} \right) &= \lim \limits_{x \to 0} \left( \dfrac{(\sqrt{25−x}−5)}{x}⋅\frac{(\sqrt{25−x}+5)}{(\sqrt{25−x}+5)} \right) && \text{Multiply numerator and denominator by the conjugate.} \\ &= \lim \limits_{x \to 0} \left( \dfrac{(25−x)−25}{x(\sqrt{25−x}+5)} \right) && \text{Multiply: } (\sqrt{25−x} −5)⋅(\sqrt{25−x}+5)=(25−x)−25. \\ & = \lim \limits_{x \to 0} \left( \dfrac{−x}{x(\sqrt{25−x}+5)} \right) && \text{Combine like terms.} \\ & =\lim \limits_{x \to 0} \left( \dfrac{−\cancel{x}}{\cancel{x}(\sqrt{25−x}+5)} \right) && \text{Simplify }\dfrac{−x}{x}=−1. \\ & =\dfrac{−1}{\sqrt{25−0}+5} && \text{Evaluate.} \\ & =\dfrac{−1}{5+5}=−\dfrac{1}{10} \end{align} \nonumber \]
Analysis
When determining a limit of a function with a root as one of two terms where we cannot evaluate directly, think about multiplying the numerator and denominator by the conjugate of the terms.
Exercise \(\PageIndex{6}\)
Evaluate the following limit: \(\lim \limits_{h \to 0} \left( \dfrac{\sqrt{16−h}−4}{h} \right) \).
Solution
\(−\frac{1}{8}\)
Example \(\PageIndex{7}\): Evaluating the Limit of a Quotient of a Function by Factoring
Evaluate \[\lim \limits_{x \to 4} \left( \frac{4−x}{\sqrt{x}−2} \right). \nonumber \]
Solution
\[\begin{align} \lim \limits_{x \to 4} (\dfrac{4−x}{\sqrt{x}−2}) & = \lim \limits_{x \to 4} (\dfrac{(2+\sqrt{x})(2−\sqrt{x})}{\sqrt{x}−2}) && \text{Factor.} \\ &= \lim \limits_{x \to 4} ( \dfrac{(2+\sqrt{x})(\cancel{2−\sqrt{x}})}{−\cancel{(2−\sqrt{x})}}) && \text{Factor −1 out of the denominator. Simplify.} \\ & = \lim \limits_{x \to 4}−(2+\sqrt{x}) && \text{Evaluate.} \\ &=−(2+ \sqrt{4}) \\ &=−4 \end{align} \nonumber \]
Analysis
Multiplying by a conjugate would expand the numerator; look instead for factors in the numerator. Four is a perfect square so that the numerator is in the form \[a^2−b^2 \nonumber \] and may be factored as \[(a+b)(a−b). \nonumber \]
Exercise \(\PageIndex{7}\)
Evaluate the following limit: \[\lim \limits_{x \to 3} \left( \frac{x−3}{\sqrt{x}−\sqrt{3} }\right).
\nonumber \]
Solution
\(2\sqrt{3}\)
how to: Given a quotient with absolute values, evaluate its limit
Try factoring or finding the LCD.
If the limit cannot be found, choose several values close to and on either side of the input where the function is undefined.
Use the numeric evidence to estimate the limits on both sides.
Example \(\PageIndex{8}\): Evaluating the Limit of a Quotient with Absolute Values
Evaluate \[\lim \limits_{x \to 7} \frac{|x−7|}{x−7}. \nonumber \]
Solution
The function is undefined at \(x=7\), so we will try values close to 7 from the left and the right.
Left-hand limit: \[\frac{|6.9−7|}{6.9−7}=\frac{|6.99−7|}{6.99−7}=\frac{|6.999−7|}{6.999−7}=−1 \nonumber \]
Right-hand limit: \[\frac{|7.1−7|}{7.1−7}=\frac{|7.01−7|}{7.01−7}=\frac{|7.001−7|}{7.001−7}=1 \nonumber \]
Since the left- and right-hand limits are not equal, there is no limit.
Exercise \(\PageIndex{8}\)
Evaluate \[ \lim \limits_{x \to 6^+} \frac{6−x}{| x−6 |}. \nonumber \]
Solution
\(−1\)
Key Concepts
The properties of limits can be used to perform operations on the limits of functions rather than the functions themselves. See Example.
The limit of a polynomial function can be found by finding the sum of the limits of the individual terms. See Example and Example.
The limit of a function that has been raised to a power equals the same power of the limit of the function. Another method is direct substitution. See Example.
The limit of the root of a function equals the corresponding root of the limit of the function.
One way to find the limit of a function expressed as a quotient is to write the quotient in factored form and simplify. See Example.
Another method of finding the limit of a complex fraction is to find the LCD. See Example.
A limit containing a function containing a root may be evaluated using a conjugate. See Example.
The limits of some functions expressed as quotients can be found by factoring. See Example.
One way to evaluate the limit of a quotient containing absolute values is by using numeric evidence. Setting it up piecewise can also be useful. See Example.
Glossary
properties of limits
a collection of theorems for finding limits of functions by performing mathematical operations on the limits
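The numeric approach used for the absolute-value quotient in Example \(\PageIndex{8}\) can also be automated. The following sketch (again Python with the SymPy library; it is ours and not part of the original text) evaluates the two one-sided limits directly and confirms that they disagree, so the two-sided limit does not exist.

# One-sided limits of |x - 7| / (x - 7), matching the numeric tables above.
from sympy import symbols, limit, Abs

x = symbols('x')
h = Abs(x - 7) / (x - 7)

print(limit(h, x, 7, dir='-'))  # prints -1 (left-hand limit)
print(limit(h, x, 7, dir='+'))  # prints 1  (right-hand limit)
# The one-sided limits disagree, so the two-sided limit does not exist.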
I’ve come to realize that I’m always tempted to start my posts with “Recently, I’ve…” or “So and so gave me such and such a problem…” or “I happened across this on…” It is as if my middle school English teachers (all of whom were excellent) succeeded so well in forcing me to transition from one idea to the next that I can’t help it even today. But, my respect for my middle school teachers aside, I think I’m going to try to avoid that here, and just sort of jump in.
Firstly, as announced at Terry Tao’s Blog, two new polymath items are on the horizon. There is a new polymath proposal at the polymath blog on the “Hot Spots Conjecture”, proposed by Chris Evans, and that has already expanded beyond the proposal post into its first research discussion post. (To prevent clutter and to maintain a certain level of organization, the discussion gets cut up into 100-comment size chunks or so, and someone summarizes some of the key points in the header each time. I think it’s a brilliant model). And the mini-polymath organized around the IMO will happen at the wiki starting on July 12.
Now, onto some number theory – One of the few complaints I have about my undergraduate education at Georgia Tech was how often a core group of concepts came up. In perhaps ten of my classes, I learned about the Euclidean algorithm (I learned about equivalence relations in even more). The idea is so old-hat to me now that I really like it when I see it come up in places I don’t immediately expect.
The Problem
Show that $latex \gcd (a^m – 1,a^n – 1) = a^{\gcd (m,n)} – 1$
Let’s look at a couple of solutions. One might see that the Euclidean algorithm works on the exponents simply. In fact, $latex \gcd (a, a^m – 1) = 1 \forall m$, and so we have that (assuming $latex n > m$ wlog) $latex \gcd (a^n-1,a^m-1) = \gcd (a^n – 1, a^n – a^{n-m} ) = \gcd (a^{n-m} -1, a^m – 1)$ So one could continue to subtract one exponent from the other, and then switch which exponent we’re reducing, and so on, literally performing the Euclidean algorithm on the exponents.
But there’s a pleasant way of visualizing this. As $latex (a-1)|(a^n-1),(a^m-1),(a^{\gcd(m,n)} – 1)$, we can look instead at $latex \gcd \left(\dfrac{a^n – 1}{a-1}, \dfrac{a^m-1}{a-1}\right)$. To work a concrete example, we might look at $latex \gcd \left(\dfrac{a^5 – 1}{a-1}, \dfrac{a^2-1}{a-1}\right)$, or $latex \gcd (1 + a + a^2 + a^3 + a^4, 1 + a)$. The first, in this case, has $latex 5$ terms, and the second has $latex 2$ terms, the same as the original exponents. Multiplying $latex 1 + a$ by $latex a^3$ and subtracting from $latex 1 + a + a^2 + a^3 + a^4$ leaves $latex 1 + a + a^2$. In particular, it is very clear that we can remove $latex m$ terms at a time from the $latex n$ terms, and that this can be repeated. I really like this type of answer for a few reasons: it was not immediately obvious to me that the Euclidean algorithm would play much of a role, and this argument is independent of $latex a$ being a number (i.e. it works in rings of polynomials). $latex \diamondsuit$
Another, essentially different way of solving this problem is to show that all common divisors of $latex a^m – 1$ and $latex a^n – 1$ are divisors of $latex a^{\gcd(m,n)} – 1$. Suppose $latex d|(a^m – 1), (a^n – 1)$. Then $latex a^m \equiv a^n \equiv 1 \mod d$, so that in particular $latex \text{ord}_d(a)|m,n$. But then $latex \text{ord}_d(a)|\gcd(m,n)$, so in particular $latex a^{\gcd(m,n)} \equiv 1 \mod d$.
This argument was a series of iff statements, so that any divisor of $latex a^{\gcd(m,n)} – 1$ is a common divisor of $latex a^m – 1$ and $latex a^n – 1$ as well. Thus we have that $latex \gcd(a^m – 1, a^n – 1) = a^{\gcd(m,n)} – 1$, as desired. $latex \diamondsuit$
Is it forgivable that Georgia Tech taught me the Euclidean algorithm in so many of my classes? Although I complain, there was reason. There is a healthy lack of duplication of classes between the different schools and colleges. So programmers might take combinatorics, engineers might take prob/stat, anyone might take intro to elementary number theory or, if they were daring, abstract algebra, and mathies themselves would learn about it in the closest thing Tech has to an intro-to-proofs class, the dedicated linear algebra course (called abstract vector spaces). All of these teach the Euclidean algorithm (and most teach combinations/permutations and equivalence relations, too), but there was a general sense that classes were self-contained. Thus it was easy to take classes out-of-major.
Brown graduate mathematics does not have this self-containment. I understand this, and I doubt that any graduate math school would. Why reinvent the wheel? But it was one of the few times when I transitioned to a new school and actually had a different learning experience (maybe the only). This removes me from a seemingly key component of Brown undergraduate life – the open curriculum, also designed to allow students to take classes out-of-concentration. So when I’m asked to comment on Brown undergraduate life or the undergraduate math program (and I have been asked), I really don’t have anything to say. It makes me feel suddenly older, yet not any wiser. Go figure.
Digression aside, I wanted to talk about progress on two conjectures. Firstly, the Goldbach conjecture. The Goldbach conjecture states that
Every even integer greater than $latex 2$ can be expressed as the sum of two primes.
The so-called ‘Ternary Goldbach conjecture’ (sometimes called the ‘weak Goldbach conjecture’) states that
Every odd number greater than $latex 7$ can be expressed as the sum of three primes.
It is known that every odd number greater than $latex 1$ is the sum of at most five primes (link to arxiv, Terry Tao’s paper). On 23 May, Harald Helfgott posted a paper on the arxiv that makes a lot of progress towards the Ternary Goldbach. In particular, his abstract states:
The ternary Goldbach conjecture states that every odd number $latex n\geq 7$ is the sum of three primes. The estimation of sums of the form $latex \sum_{p\leq x} e(\alpha p)$, $latex \alpha = a/q + O(1/q^2)$, has been a central part of the main approach to the conjecture since (Vinogradov, 1937). Previous work required $latex q$ or $latex x$ to be too large to make a proof of the conjecture for all $latex n$ feasible. The present paper gives new bounds on minor arcs and the tails of major arcs. For $latex q\geq 4\cdot 10^6$, these bounds are of the strength needed to solve the ternary Goldbach conjecture. Only the range $latex q\in \lbrack 10^5, 4\cdot 10^6\rbrack$ remains to be checked, possibly by brute force, before the conjecture is proven for all $latex n$. The new bounds are due to several qualitative improvements. In particular, this paper presents a general method for reducing the cost of Vaughan’s identity, as well as a way to exploit the tails of minor arcs in the context of the large sieve.
Pretty slick. Finally, and this is complete hearsay, it is rumored that the ABC conjecture might have been solved.
I read of potential progress by S. Mochizuki over at the Secret Blogging Seminar. To be honest, I don’t really know much about the conjecture. But as they said at the Secret Blogging Seminar: “My understanding is that blogs are for such things.” At least sometimes.
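P.S. For anyone who likes to test an identity numerically before (or after) proving it, here is a tiny brute-force check of the gcd identity from earlier in this post. It is only a sketch in Python, and of course not a proof:

# Check gcd(a^m - 1, a^n - 1) == a^gcd(m, n) - 1 over a small range of values.
from math import gcd
from itertools import product

for a, m, n in product(range(2, 8), range(1, 10), range(1, 10)):
    assert gcd(a**m - 1, a**n - 1) == a**gcd(m, n) - 1
print("identity verified for all tested (a, m, n)")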
If a poster includes in the title some formula enclosed between $$, like $$x^2+y^2=z^2$$, the title of the question will take up more space in various lists of questions, like here or here. I guess that we can agree that this is not a good way to use MathJax (LaTeX) in titles. A user posting such a question could have done it by mistake. Or even on purpose, not knowing that this is not a good way to write a title.
At Mathematics Stack Exchange the string $$ is included among things which are disallowed/blacklisted in the title, see: Using block (displayed) equations in question titles. Would blacklisting this on MathOverflow be useful, too? (As far as I know, moderators can request from the SE team changes in blacklisted tags, blacklisted words, etc.)
This is probably not a huge problem. I noticed it on one post made today. (But the title was edited during the grace period. So I could not show this particular instance, not even by linking to the revision history.) I do not vouch for my SQL skills, but using this query I only found one question with such a title: Rewriting a series $\sum_{n=0}^\infty \frac{1}{n!}(\Delta^\varepsilon)^n a_n$ in the form $\sum_{n=0}^\infty c_n \varepsilon^n$ (Which, I assume, will be edited soon after being mentioned on meta, so I will also add a link to the revision history.) Other questions found by the query used constructs such as $a$$+$$b$ instead of a single inline formula. (See the detailed answer by arjafi, which even includes the posts which have such a title anywhere in the revision history.)
I have a self-defined command like
\newcommand{\AND}[2]{\left(#1 \vee #2 \right)}
which makes sure that I always give the correct number of parameters to my formula and which handles the parentheses as well. Now, when I use this command in an align environment and the arguments become very long, then a linebreak becomes necessary. Here's the problem: in the align environment I have to set linebreaks myself; however, between \left and \right linebreaks aren't allowed. So, is there any way to have an automatic linebreak here, which also keeps the correct size of the parentheses? My only workaround so far is not to use the macro in these situations, which results in unclean code. The breqn package doesn't seem to fix this problem.
\documentclass[11pt, draft]{scrbook}
\usepackage{amsmath}
\usepackage{breqn}
\newcommand{\AND}[2]{\ensuremath{\left(#1\vee#2\right)}}
\begin{document}
\begin{align*}
aaa&= \AND{\sum^a_b \AND{\AND{\AND{\AND{\AND{\AND{\AND{\AND{\AND{\AND{a}{b}}{b}}{b}}{b}}{\AND{\AND{a}{\AND{a}{b}}}{\AND{a}{\AND{a}{b}}}}}{b}}{b}}{a}}{\AND{a}{\AND{\AND{a}{b}}{b}}}}{\frac{a}{\frac{a}{b}}}}{\sum^a_b \AND{c}{\frac{a}{\frac{a}{b}}}}\\
&= b
\end{align*}
\end{document}
EDIT: After fiddling around for a while, I came up with a small workaround that solves a part of the problem, but not all of it. I defined a new command which inserts a linebreak and handles the parentheses for this linebreak:
\documentclass[11pt, draft]{scrbook}
\usepackage{amsmath}
\usepackage{mathtools}
\newcommand{\AND}[2]{\ensuremath{\left(#1\vee#2\right)}}
\newcommand{\ANDbr}[2]{\ensuremath{%
\begin{lgathered}[t]%
\left(#1 \vee \vphantom{#2} \right. \\
\left.\vphantom{#1\vee}#2\right)
\end{lgathered}%
}}
\begin{document}
\begin{align*}
aaa&= \ANDbr{\AND{\AND{\AND{a}{b}}{\AND{\frac{a}{b}}{c}}}{\AND{\AND{\sum_a^b a}{p}}{\AND{a}{b}}}}{\AND{\AND{\AND{a}{b}}{\AND{a}{b}}}{\AND{\AND{a}{b}}{\AND{a}{b}}}}\\
&= b
\end{align*}
\end{document}
This seems to work as long as I need only one linebreak. Inserting \ANDbr a second time messes things up, however.
EDIT 2: I tried to add \allowbreak into the definition of \AND, but it didn't change anything.
Assume that $\Gamma$ is a group with neutral element $e$. We associate to $\Gamma$ the following groupoid $G$: $G=\Gamma \times \Gamma,\;\;\;G^{(0)}=\Gamma \times \{e\},\;\;s(a,b)=(a,e),\;\;\; r(a,b)=(ba, e)$ If $\phi:\Gamma_{1}\to \Gamma_{2}$ is a group isomorphism, then $\tilde{\phi}:G_{1} \to G_{2}$ with $\tilde{\phi}(a,b)=(\phi(a), \phi(b))$ is a groupoid isomorphism. So isomorphic groups give us isomorphic groupoids. Now we ask the converse: are there two non-isomorphic groups $\Gamma_{1}, \Gamma_{2}$ such that the corresponding groupoids $G_{1}, G_{2}$ are isomorphic? Note that a groupoid isomorphism between $G_{1}, G_{2}$ does not necessarily come from a group isomorphism between $\Gamma_{1}, \Gamma_{2}$, as constructed above. An easy example can be provided by $\Gamma_{1}=\Gamma_{2}=\mathbb{Z}/2\mathbb{Z}$. This situation is a motivation for the above question.
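For concreteness, here is a small computational sketch (Python; the notation and helper names are mine, not part of the construction above) checking, for $\Gamma_{1}=\Gamma_{2}=\mathbb{Z}/4\mathbb{Z}$ written additively and the automorphism $\phi(x)=3x$, that the induced map $\tilde{\phi}$ is compatible with the source and range maps given above:

# Gamma = Z/4Z written additively, so the neutral element e is 0.
n = 4

def phi(x):               # a group automorphism of Z/4Z (3 is a unit mod 4)
    return (3 * x) % n

def s(a, b):              # source map s(a, b) = (a, e)
    return (a, 0)

def r(a, b):              # range map r(a, b) = (ba, e); additively, b + a
    return ((b + a) % n, 0)

def tilde_phi(a, b):      # induced map on G = Gamma x Gamma
    return (phi(a), phi(b))

for a in range(n):
    for b in range(n):
        assert s(*tilde_phi(a, b)) == tilde_phi(*s(a, b))
        assert r(*tilde_phi(a, b)) == tilde_phi(*r(a, b))
print("tilde_phi intertwines the source and range maps on all arrows")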
That was an excellent post and qualifies as a treasure to be found on this site! wtf wrote: When infinities arise in physics equations, it doesn't mean there's a physical infinity. It means that our physics has broken down. Our equations don't apply. I totally get that. In fact even our friend Max gets that. http://blogs.discovermagazine.com/crux/ ... g-physics/ Thanks for the link and I would have showcased it all on its own had I seen it first. The point I am making is something different. I am pointing out that: All of our modern theories of physics rely ultimately on highly abstract infinitary mathematics. That doesn't mean that they necessarily do; only that so far, that's how the history has worked out. I see what you mean, but as Max pointed out when describing air as seeming continuous while actually being discrete, it's easier to model a continuum than a bazillion molecules, each with functional probabilistic movements of their own. Essentially, it's taking an average and it turns out that it's pretty accurate. But what I was saying previously is that we work with the presumed ramifications of infinity, "as if" this or that were infinite, without actually ever using infinity itself. For instance, for y = 1/x, as x approaches infinity, y approaches 0, but we don't actually USE infinity in any calculations, but we extrapolate. There is at the moment no credible alternative. There are attempts to build physics on constructive foundations (there are infinite objects but they can be constructed by algorithms). But not finitary principles, because to do physics you need the real numbers; and to construct the real numbers we need infinite sets. Hilbert pointed out there is a difference between boundless and infinite. For instance space is boundless as far as we can tell, but it isn't infinite in size and never will be until eternity arrives. Why can't we use the boundless assumption instead of full-blown infinity?
1) The rigorization of Newton's calculus culminated with infinitary set theory. Newton discovered his theory of gravity using calculus, which he invented for that purpose. I didn't know he developed calculus specifically to investigate gravity. Cool! It does make sense now that you mention it. However, it's well-known that Newton's formulation of calculus made no logical sense at all. If \(\Delta y\) and \(\Delta x\) are nonzero, then \(\frac{\Delta y}{\Delta x}\) isn't the derivative. And if they're both zero, then the expression makes no mathematical sense! But if we pretend that it does, then we can write down a simple law that explains apples falling to earth and the planets endlessly falling around the sun. I'm going to need some help with this one. If dx = 0, then it contains no information about the change in x, so how can anything result from it? I've always taken dx to mean a differential that is smaller than can be discerned, but still able to convey information. It seems to me that calculus couldn't work if it were based on division by zero, and that if it works, it must not be. What is it I am failing to see? I mean, it's not an issue of 0/0 making no mathematical sense, it's a philosophical issue of the nonexistence of significance because there is nothing in zero to be significant.
2) Einstein's general relativity uses Riemann's differential geometry. In the 1840's Bernhard Riemann developed a general theory of surfaces that could be Euclidean or very far from Euclidean. As long as they were "locally" Euclidean.
Like spheres, and tori, and far weirder non-visualizable shapes. Riemann showed how to do calculus on those surfaces. 60 years later, Einstein had these crazy ideas about the nature of the universe, and the mathematician Minkowski saw that Einstein's ideas made the most mathematical sense in Riemann's framework. This is all abstract infinitary mathematics. Isn't this the same problem as previous? dx=0?
3) Fourier series link the physics of heat to the physics of the Internet; via infinite trigonometric series. In 1807 Joseph Fourier analyzed the mathematics of the distribution of heat through an iron bar. He discovered that any continuous function can be expressed as an infinite trigonometric series, which looks like this: $$f(x) = \sum_{n=0}^\infty a_n \cos(nx) + \sum_{n=1}^\infty b_n \sin(nx)$$ I only posted that because if you managed to survive high school trigonometry, it's not that hard to unpack. You're composing any motion into a sum of periodic sine and cosine waves, one wave for each whole number frequency. And this is an infinite series of real numbers, which we cannot make sense of without using infinitary math. I can't make sense of it WITH infinitary math lol! What's the cosine of infinity? What's the infinite-th 'a'?
4) Quantum theory is functional analysis. If you took linear algebra, then functional analysis can be thought of as infinite-dimensional linear algebra combined with calculus. Functional analysis studies spaces whose points are actually functions; so you can apply geometric ideas like length and angle to wild collections of functions. In that sense functional analysis actually generalizes Fourier series. Quantum mechanics is expressed in the mathematical framework of functional analysis. QM takes place in an infinite-dimensional Hilbert space. To explain Hilbert space requires a deep dive into modern infinitary math. In particular, Hilbert space is complete, meaning that it has no holes in it. It's like the real numbers and not like the rational numbers. QM rests on the mathematics of uncountable sets, in an essential way. Well, thanks to Hilbert, I've already conceded that the boundless is not the same as the infinite and if it were true that QM required infinity, then no machine nor human mind could model it. It simply must be true that open-ended finites are actually employed and underpin QM rather than true infinite spaces. Like Max said, "Not only do we lack evidence for the infinite but we don’t need the infinite to do physics. Our best computer simulations, accurately describing everything from the formation of galaxies to tomorrow’s weather to the masses of elementary particles, use only finite computer resources by treating everything as finite. So if we can do without infinity to figure out what happens next, surely nature can, too—in a way that’s more deep and elegant than the hacks we use for our computer simulations." We can *claim* physics is based on infinity, but I think it's more accurate to say *pretend* or *fool ourselves* into thinking such. Max continued with, "Our challenge as physicists is to discover this elegant way and the infinity-free equations describing it—the true laws of physics. To start this search in earnest, we need to question infinity. I’m betting that we also need to let go of it." He said, "let go of it" like we're clinging to it for some reason external to what is true. I think the reason is to be rid of god, but that's my personal opinion.
Because if we can't have infinite time, then there must be a creator and yada yada. So if we cling to infinity, then we don't need the creator. Hence why Craig quotes Hilbert because his first order of business is to dispel infinity and substitute god. I applaud your effort, I really do, and I've learned a lot of history because of it, but I still cannot concede that infinity underpins anything and I'd be lying if I said I could see it. I'm not being stubborn and feel like I'm walking on eggshells being as amicable and conciliatory as possible in trying not to offend and I'm certainly ready to say "Ooooohhh... I see now", but I just don't see it.
ps -- There's our buddy Hilbert again. He did many great things. William Lane Craig misuses and abuses Hilbert's popularized example of the infinite hotel to make disingenuous points about theology and in particular to argue for the existence of God. That's what I've got against Craig. Craig is no friend of mine and I was simply listening to a debate on youtube (I often let youtube autoplay like a radio) when I heard him quote Hilbert, so I dug into it and posted what I found. I'm not endorsing Craig lol
5) Cantor was led to set theory from Fourier series. In every online overview of Georg Cantor's magnificent creation of set theory, nobody ever mentions how he came upon his ideas. It's as if he woke up one day and decided to revolutionize the foundations of math and piss off his teacher and mentor Kronecker. Nothing could be further from the truth. Cantor was in fact studying Fourier's trigonometric series! One of the questions of that era was whether a given function could have more than one distinct Fourier series. To investigate this problem, Cantor had to consider the various types of sets of points on which two series could agree; or equivalently, the various sets of points on which a trigonometric series could be zero. He was thereby led to the problem of classifying various infinite sets of real numbers; and that led him to the discovery of transfinite ordinal and cardinal numbers. (Ordinals are about order in the same way that cardinals are about quantity). I still can't understand how one infinity can be bigger than another since, to be so, the smaller infinity would need to have limits which would then make it not infinity.
In other words, and this is a fact that you probably will not find stated as clearly as I'm stating it here: If you begin by studying the flow of heat through an iron rod; you will inexorably discover transfinite set theory. Right, because of what Max said about the continuum model vs the actual discrete. Heat flow is actually IR light flow which is radiation from one molecule to another: a charged particle vibrates and vibrations include accelerations which cause EM radiation that emanates out in all directions; then the EM wave encounters another charged particle which causes vibration and the cycle continues until all the energy is radiated out. It's a discrete process from molecule to molecule, but is modeled as continuous for simplicity's sake. I've long taken issue with the 3 modes of heat transmission (conduction, convection, radiation) because there is only radiation. Atoms do not touch, so they can't conduct, but the van der Waals force simply transfers the vibrations more quickly when atoms are sufficiently close. Convection is simply vibrating atoms in linear motion that are radiating IR light.
I have many issues with physics and have often described it as more of an art than a science (hence why it's so difficult). I mean, there are pages and pages on the internet devoted to simply trying to define heat.https://www.quora.com/What-is-heat-1https://www.quora.com/What-is-meant-by-heathttps://www.quora.com/What-is-heat-in-physicshttps://www.quora.com/What-is-the-definition-of-heathttps://www.quora.com/What-distinguishes-work-and-heat Physics is a mess. What gamma rays are, depends who you ask. They could be high-frequency light or any radiation of any frequency that originated from a nucleus. But I'm digressing.... I do not know what that means in the ultimate scheme of things. But I submit that even the most ardent finitist must at least give consideration to this historical reality. It just means we're using averages rather than discrete actualities and it's close enough. I hope I've been able to explain why I completely agree with your point that infinities in physical equations don't imply the actual existence of infinities. Yet at the same time, I am pointing out that our best THEORIES of physics are invariably founded on highly infinitary math. As to what that means ... for my own part, I can't help but feel that mathematical infinity is telling us something about the world. We just don't know yet what that is. I think it means there are really no separate things and when an aspect of the universe attempts to inspect itself in order to find its fundamentals or universal truths, it will find infinity like a camera looking at its own monitor. Infinity is evidence of the continuity of the singular universe rather than an existing truly boundless thing. Infinity simply means you're looking at yourself. Anyway, great post! Please don't be mad. Everyone here values your presence and are intimidated by your obvious mathematical prowess Don't take my pushback too seriously I'd prefer if we could collaborate as colleagues rather than competing.