url
stringlengths 14
2.42k
| text
stringlengths 100
1.02M
| date
stringlengths 19
19
| metadata
stringlengths 1.06k
1.1k
|
|---|---|---|---|
https://math.stackexchange.com/questions/1398248/maximum-likelihood-estimates-question-using-bernoulli/1398294
|
# Maximum Likelihood Estimates Question Using Bernoulli
Suppose that $X$ is a discrete random variable with $P(X = 1) = p$ and $P(X = 2) = 1-p$. Three independent observations of $X$ are made: $x_1 =2, x_2 = 1, x_3 = 2$
a.) Write out likelihood as a function of $p$.
b.) Find the maximum likelihood estimator.
So I first recognized it as a Bernoulli distribution, and got the likelihood function = $3p(1-p)^2$
Then I derived it and found the max which was $p=1/3$. I'm not exactly sure if this approach was correct, can anyone help? Also, if I wanted to check if this was unbiased and consistent, how would I approach doing this? Thanks.
• First part: actually the likelihood is probably rather $p(1-p)^2$ if the observations come in order (but it is difficult to decide from what you write). Second part: to check unbiasedness and consistency, what are your ideas? – Did Aug 15 '15 at 15:13
The likelihood is
$$L(x_1,x_2,x_3\mid p) = \mathbb P(x_1=2)\mathbb P(x_2=1)\mathbb P(x_3=2) = p(1-p)^2.$$
To find the maximum of $L$ we first take the partial derivative with respect to $p$:
$$\frac\partial{\partial p} L(x_1,x_2,x_3\mid p) = (1-p)^2 - 2p(1-p) = (1-p)(1-p-2p) = (1-p)(1-3p).$$
Hence $$\frac\partial{\partial p}L(x_1,x_2,x_3\mid p) = 0 \implies p = 0\text{ or } p = \frac13.$$
We can assume that $p\ne0$ (otherwise the problem is trivial), so we have the candidate MLE $\hat p = \frac13$. Now, $L$ is decreasing on $(\frac13,1)$ (since then $\frac\partial{\partial p}L<0$), and $L$ is concave on $\left(0,\frac23\right)$, as can be seen by
$$\frac{\partial^2}{\partial p^2} L(x_1,x_2,x_3\mid p) = -(1-3p) - 3(1-p) = 2(3p-2).$$ So $\hat p=\frac13$ is the absolute maximum of $L$ on $(0,1)$, and is the maximum likelihood estimator.
Now, $$\mathbb E\left[X\mid p=\frac13\right] = 1\cdot\frac13 + 2\cdot\left(1-\frac13\right) = \frac53,$$ so the bias of $\hat p$ is $$\frac53 - \frac13 = \frac43.$$ Since $\hat p$ is constant with respect to $n$, it's clear that $\hat p$ is not consistent.
|
2019-10-14 06:56:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.949238121509552, "perplexity": 195.568008599846}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986649232.14/warc/CC-MAIN-20191014052140-20191014075140-00128.warc.gz"}
|
http://www.bldnyilaszaro.hu/zs5ieo/can-you-put-letters-in-blue-box-3d24ad
|
Blue box letters dont work. If it falls off, you'll get the letter back in a couple of days with a "needs postage" message on it. Type Narrator settings and click on Narrator settings. The guide below shows you size and weight restrictions for large letters and how much postage will cost. Provided the change stays attached to the letter, I'd say, no problem. They don't have time for that stuff! If you drop it in a blue collection box or Post Office lobby mail slot, it will be returned to you… Please answer the below to assist you better: 1) When you say blue box around items, which items you are referring to? Update: I used a pre-paid returns label, so costs shouldn't be an issue. Another box appeared in Angel Row in the centre of Nottingham. CORONAVIRUS has led to tough government measures being put in place. This is nice because it's not limited by the tag the text is placed in like some of the other answers. I have a prepaid returns label - Can I pop the parcel into the post box, or must I take it to the post office? If you put the informal name in your letter, but you are actually looking for the formal name, the report will show that. We have organised them by type so you can find what you need more easily. Requesting a Certified Mail Receipt or other proof of delivery will add to the price. There are also blue boxes for priority mail (second and third options), but they can be a pain to find. If you print postage on-line you are not anonymous and your package no matter the weight can be put in a blue collection box or left on the counter. Blue Box Recycling Collection. VideoBrexit marked by the chimes of Big Ben, Gibraltar gets UK-Spain deal to keep open border, playFour protests that triggered change - and one that didn't. As a result, a report can replicate those requirements for a pre-letter run check. diy gift for my boyfriend for our 1st anniversary! Hope it helps, David . When you’re done entering your letters, simply hit the search button on the right. You need to turn off Narrator which you can do by going into settings > Ease of Access > Narator and turn off . The 13oz limit only applies if you are using postage stamps. Place your Blue Box at the curb with your garbage by 7 a.m. on your regular collection day. Much much better than the days of standing in line even when we used online postage (but had to hand carry in those international packages) ! Heck of a federal offense too if someone is that brazen. I DO know that this first part of the journey, after the box is emptied at 2:30 here, is that this is hand sorted. For Online Computer Support, Ask a Computer Technician. If not you can always go in to a Post Office and ask the to give you some customs labels & blue airmail … You can put the text in a \mbox and use \boxed. If any of you purchase a box from amazon with the correct amount of dividers please let me know so I can add it to this post’s recommendations. Large Letter stamp is just for large letters up to 100g. In the example, the letters are printed on the back of a double-sided sheet of patterned cardstock from EK Success. Also make sure that you’ve selected the right merge fields in your letter. Anyone else have a definite answer? Please click on accept as solution if answered your question i thought this because with the Under 15g ones they just give me a normal 87P UK stamp to put on it instead of a International stamp. I can't mail a package unless the post office is actually open. Video, Four protests that triggered change - and one that didn't, Lookahead 2021: Trips to Mars and the Arctic. Less than 25mm width. Take you mail int… Heck of a federal offense too if someone is that brazen. Large Letter stamp is just for large letters up to 100g. I finally decided to take the plunge and complete this project as well, and let me say it was a wonderful experience! You need to turn off Narrator which you can do by going into settings > Ease of Access > Narator and turn off . Boxes and bags are also available for pick up at the following locations. AND he also said this is where things can go array... : a package in going to Mn not Ms and accidently gets thrown in the wrong bin and takes that perverbial vacation we have heard about... off the subject, but useful info for all... Sandi In Central Oregon. yes you may drop it in the blue box if it has an online purcchased postage/customs form on it. Does it not get delivered … IIRC, the form takes alot of time to get into the system, far longer than the standard 1-2 days for delivery of a local first-class letter, but nevertheless there is (i think) a mechanism to do that. Yes, of course you can. Any package or letter with wires protruding or with any substance or smell leaking or sifting out. Basically a standard DVD case in a Jiffy bubble-wrap envelope. I've started a new document, and when I paste text from another document into it, the pasted text has a blue box around it, as shown in the picture. Then creating a DIY alphabet letter box using toys and items you have at home is simple. Follow the steps below: 1. It is important to recycle the right things in your blue cart and community recycling depots, and properly prepare your materials.. You can also do this by pressing and holding WinKey+CTRL+Enter. What happens to your body in extreme heat? 0. "Open When" Letters . If you open up the drawer, you should be able to see the description of this limit. - Answered by a verified Tech Support Specialist. Since they’re handcrafted with deep shadow box mouldings, our varsity letter frames have enough depth to accommodate pins, patches, and other small trinkets. The paint jobs, spotted in the centres of Southampton, Leeds, Nottingham and Taunton, sparked speculation among locals. Victory is yours. “Don’t be a victim. Sandi in Central Oregon. Post boxes were painted gold in the home locations of gold medal winners in the 2012 London Olympics. The BBC is not responsible for the content of external sites. Berties Assetts Posts: 1,532. USPS, Re: Can you put International packages in the BLUE BOXES anymore? Hope this helps. So it's too late to retrieve the letter now, what do you reckon will happen to it? Computer . Thanks. And if you're in a situation where you can't really change the html (for instance if you're using the same style sheet on multiple pages and need to make a retroactive change) this could be helpful. You can also do this by pressing and holding WinKey+CTRL+Enter. I'm just trying to protect myself and not lose my money. I do not want it mailed. 2) Packages with postage printed using Click-N-Ship or another PC Postage provider may weigh more than one pound, but must fit in the collection box. If you don't see the Narrator in the taskbar, try checking the Task Manager under the Details tab (Crtl + Shift + Esc opens the task manager. Provided it doesn't leak, smell or have wires protruding that is. You can print postage online for anything that you can fit in the post box, then you don't need to go to the post office. Yes, you can open the Fancy Letter Generator website on your device and generate stylish and cool Fancy Letters from here. We not only have what logo is a blue box with white letters in it logos but many more! I put it in the USPS mailbox at my local post office. So, they will see your international package and put it in the correct bin hopefully. Once you've got all that right you can then post it in a regular post-box. Please click on accept as solution if answered your question Example: \begin{document} $\boxed{\sin \theta \mbox{blablablablablabla}}$ \end{document} This should produce a box around both sin theta and blablablablablabla. It has a blue box around what it is reading aloud; if your volume is muted, you may not notice that. Add message | Report | See all. Saker Thu 12-Dec-13 09:37:41. But after some noticed the locations were all due to host matches in the forthcoming Cricket World Cup, it wasn't long until the secret was out. Your blue cart is for acceptable household paper, cardboard and container packaging. 3. You are fine, i say... what are they going to do... have it sent back to your place? I seem to recall (checking for cite now) that you can send a form into the Postmaster to ask him to “stop delivery” of a piece of mail. You may also use the Post Office Self-Service Kiosk to buy stamps and drop your package in the lobby package slot. About a month ago I heard fellow missionary girlfriends talking about "Open When" letters to send their missionaries. Each PO seems to be different. I need the letter back. Examples remain at Castlefield, Manchester and outside Windsor Castle. The regular blue boxes are technically only for the smallest shipping option, but I always put everything in there. Residents can put the following items in their Blue Bin (recycling bin). Read about our approach to external linking. Include up to three “?” as wildcards. Can't buy a stamp without getting in line. You’ll see two ways here to insert a text box, both of which add a text box in the same way. My post office even took the package drop box out of the lobby, along with the stamp machine. If your letter is over 13 ounces, but has prepaid mailing labels on it, as long as it fits in the collection box, you can … For toddlers and preschoolers you can use this box when preparing for your letter of the week activities. If you don’t know what Cricut Access is, make sure to read this great guide I put together.. However, some of them need to be purchased prior to cutting your project. The actual rule about what you can put in the blue boxes is the same as for flat-rate priority Mail boxes: If it fits, it ships. Usually I use paypal labels but I have just put regular 1st class stamps on them & they have all got to their destination. yep its legal & if you get bitten by a dog when your hands in the letter box you can sue the owner too. The current price of a First-Class Mail® Forever® letter stamp is $0.55 and$0.35 for a postcard. In this section, you can find our many post boxes for your house. I have a small 4oz package to mail out today to Canada but remembered seeing somewhere you couldn't mail anything larger than letter sized in the USPS boxes anymore. Look for a notice on the blue PO box. 0. Post boxes were painted gold in the home locations of gold medal winners in the 2012 London Olympics. That's just nutty. The program provides residents with one CRD blue box and two blue bags per home. The guide below shows you size and weight restrictions for large letters and how much postage will cost. But my parcel is just a tiny bit over that limit, and so fits into the post box. I am a rural mailcarrier and do this for customers several times a day. Indeed you can but this will expose you to a uniquely British fear - does the post box I have just put my incredibly important letter into work?
|
2021-02-24 22:33:44
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.26709800958633423, "perplexity": 1774.6105831059283}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178349708.2/warc/CC-MAIN-20210224223004-20210225013004-00250.warc.gz"}
|
https://paperswithcode.com/paper/a-high-performance-reconfigurable-fully
|
# A High-Performance, Reconfigurable, Fully Integrated Time-Domain Reflectometry Architecture Using Digital I/Os
1 May 2021 · , , , , ·
Time-domain reflectometry (TDR) is an established means of measuring impedance inhomogeneity of a variety of waveguides, providing critical data necessary to characterize and optimize the performance of high-bandwidth computational and communication systems. However, TDR systems with both the high spatial resolution (sub-cm) and voltage resolution (sub-$\muV$) required to evaluate high-performance waveguides are physically large and often cost-prohibitive, severely limiting their utility as testing platforms and greatly limiting their use in characterizing and trouble-shooting fielded hardware. Consequently, there exists a growing technical need for an electronically simple, portable, and low-cost TDR technology. The receiver of a TDR system plays a key role in recording reflection waveforms; thus, such a receiver must have high analog bandwidth, high sampling rate, and high-voltage resolution. However, these requirements are difficult to meet using low-cost analog-to-digital converters (ADCs). This article describes a new TDR architecture, namely, jitter-based APC (JAPC), which obviates the need for external components based on an alternative concept, analog-to-probability conversion (APC) that was recently proposed. These results demonstrate that a fully reconfigurable and highly integrated TDR (iTDR) can be implemented on a field-programmable gate array (FPGA) chip without using any external circuit components. Empirical evaluation of the system was conducted using an HDMI cable as the device under test (DUT), and the resulting impedance inhomogeneity pattern (IIP) of the DUT was extracted with spatial and voltage resolutions of 5 cm and 80 $\muV$, respectively. These results demonstrate the feasibility of using the prototypical JAPC-based iTDR for real-world waveguide characterization applications
PDF Abstract
## Code Add Remove Mark official
No code implementations yet. Submit your code now
## Datasets
Add Datasets introduced or used in this paper
## Results from the Paper Add Remove
Submit results from this paper to get state-of-the-art GitHub badges and help the community compare results to other papers.
|
2023-02-09 11:32:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.389934778213501, "perplexity": 4853.440156439321}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499966.43/warc/CC-MAIN-20230209112510-20230209142510-00592.warc.gz"}
|
https://leanprover-community.github.io/mathlib_docs/probability_theory/integration.html
|
# mathlibdocumentation
probability_theory.integration
# Integration in Probability Theory #
Integration results for independent random variables. Specifically, for two independent random variables X and Y over the extended non-negative reals, E[X * Y] = E[X] * E[Y], and similar results.
theorem probability_theory.lintegral_mul_indicator_eq_lintegral_mul_lintegral_indicator {α : Type u_1} {Mf : measurable_space α} [M : measurable_space α] {μ : measure_theory.measure α} (hMf : Mf M) (c : ℝ≥0∞) {T : set α} (h_meas_T : M.measurable_set' T) (h_ind : μ) {f : α → ℝ≥0∞} (h_meas_f : measurable f) :
∫⁻ (a : α), (f a) * T.indicator (λ (_x : α), c) a μ = (∫⁻ (a : α), f a μ) * ∫⁻ (a : α), T.indicator (λ (_x : α), c) a μ
This (roughly) proves that if a random variable f is independent of an event T, then if you restrict the random variable to T, then E[f * indicator T c 0]=E[f] * E[indicator T c 0]. It is useful for lintegral_mul_eq_lintegral_mul_lintegral_of_independent_measurable_space.
theorem probability_theory.lintegral_mul_eq_lintegral_mul_lintegral_of_independent_measurable_space {α : Type u_1} {Mf Mg : measurable_space α} [M : measurable_space α] {μ : measure_theory.measure α} (hMf : Mf M) (hMg : Mg M) (h_ind : μ) (f g : α → ℝ≥0∞) (h_meas_f : measurable f) (h_meas_g : measurable g) :
∫⁻ (a : α), (f a) * g a μ = (∫⁻ (a : α), f a μ) * ∫⁻ (a : α), g a μ
This (roughly) proves that if f and g are independent random variables, then E[f * g] = E[f] * E[g]. However, instead of directly using the independence of the random variables, it uses the independence of measurable spaces for the domains of f and g. This is similar to the sigma-algebra approach to independence. See lintegral_mul_eq_lintegral_mul_lintegral_of_independent_fn for a more common variant of the product of independent variables.
theorem probability_theory.lintegral_mul_eq_lintegral_mul_lintegral_of_indep_fun {α : Type u_1} [M : measurable_space α] (μ : measure_theory.measure α) (f g : α → ℝ≥0∞) (h_meas_f : measurable f) (h_meas_g : measurable g) (h_indep_fun : μ) :
∫⁻ (a : α), (f * g) a μ = (∫⁻ (a : α), f a μ) * ∫⁻ (a : α), g a μ
This proves that if f and g are independent random variables, then E[f * g] = E[f] * E[g].
|
2021-04-16 19:38:07
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7255540490150452, "perplexity": 2011.6463911329188}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038089289.45/warc/CC-MAIN-20210416191341-20210416221341-00279.warc.gz"}
|
https://www.khanacademy.org/math/algebra2/rational-expressions-equations-and-functions/multiplying-and-dividing-rational-expressions/v/multiplying-and-dividing-rational-expressions-2
|
# Multiplying rational expressions
CCSS Math: HSA.APR.D.7
## Video transcript
Multiply and express as a simplified rational. State the domain. Let's multiply it, and then before we simplify it, let's look at the domain. This is equal to, if we just multiplied the numerators, a squared minus 4 times a plus 1, all of that over-- multiply the denominators-- a squared minus 1 times a plus 2. Now, the a squared minus 4 and the a squared minus 1 might look familiar to us. These are the difference in squares, a special type of binomial that you could immediately, or hopefully maybe immediately recognize. It takes the form a squared minus b squared, difference of squares, and it's always going to be equal to a plus b times a minus b. We can factor this a squared minus 4 and we can also factor this a squared minus 1, and that'll help us actually simplify the expression or simplify the rational. This top part, we can factor the a squared minus 4 as a plus 2-- 2 squared is 4-- times a minus 2, and then that times a plus 1. Then in the denominator, we can factor a squared minus 1-- let me do that in a different color. a squared minus 1 we can factor as a plus 1 times a minus 1. If you ever want to say, hey, why does this work? Just multiply it out and you'll see that when you multiply these two things, you get that thing right there. Then in the denominator, we also have an a plus 2. We've multiplied it, we've factored out the numerator, we factored out the denominator. Let's rearrange it a little bit. So this numerator, let's put the a plus 2's first in both the numerator and the denominator. So this, we could get a plus 2 in the numerator, and then in the denominator, we also have an a plus 2. In the numerator, we took care of our a plus 2's. That's the only one that's common, so in the numerator, we also have an a minus 2. Actually, we have an a plus 1-- let's write that there, too. We have an a plus 1 in the numerator. We also have an a plus 1 in denominator. In the numerator, we have an a minus 2, and in the denominator, we have an a minus 1. So all I did is I rearranged the numerator and the denominator, so if there was something that was of a similar-- if the same expression was in both, I just wrote them on top of each other, essentially. Now, before we simplify, this is a good time to think about the domain or think about the a values that aren't in the domain, the a values that would invalidate or make this expression undefined. Like we've seen before, the a values that would do that are the ones that would make the denominator equal 0. So the a values that would make that equal to 0 is a is equal to negative 2. You could solve for i. You could say a plus 2 is equal to 0, or a is equal to negative 2. a plus 1 is equal to 0. Subtract 1 from both sides. a is equal to negative 1. Or a minus 1 is equal to 0. Add one to both sides, and you get a is equal to 1. For this expression right here, you have to add the constraint that a cannot equal negative 2, negative 1, or 1, that a can be any real number except for these. We're essentially stating our domain. We're stating the domain is all possible a's except for these things right here, so we'd have to add that little caveat right there. Now that we've done that, we can factor it. We have an a plus 2 over an a plus 2. We know that a is not going to be equal to negative 2, so that's always going to be defined. When you divide something by itself, that is going to just be 1. The same thing with the a plus 1 over the a plus 1. That's going to be 1. All you're going to be left with is an a minus 2 over a minus 1. So the simplified rational is a minus 2 over a minus 1 with the constraint that a cannot equal negative 2, negative 1, or 1. You're probably saying, Sal, what's wrong with it equaling, for example, negative 1 here? Negative 1 minus 1, it's only going to be a negative 2 here. It's going to be defined. But in order for this expression to really be the same as this expression up here, it has to have the same constraints. It has to have the same domain. It cannot be defined at negative 1 if this guy also is not defined at negative 1. And so these constraints essentially ensure that we're dealing with the same expression, not one that's just close.
|
2019-02-20 03:42:17
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8472952246665955, "perplexity": 244.07982155730463}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247494424.70/warc/CC-MAIN-20190220024254-20190220050254-00564.warc.gz"}
|
https://britconsulting.co/commiphora-gileadensis-hpt/61d03f-55-cancri-mass
|
The component stars are separated by a mean distance of about 1,150 astronomical units (172 billion km, or 108 billion mi). In the case of 55 Cancri d, this lower limit was around 4 … 55 Cancri e (abbreviated 55 Cnc e, formally named Janssen /ˈdʒænsən/) is an exoplanet in the orbit of its Sun-like host star 55 Cancri A. Mass, radius, and composition of the transiting planet 55 Cnc e : using interferometry and correlations - A quick update ... 55 Cancri: Stellar Astrophysical Parameters, a Planet in the Habitable Zone, and Implications for the Radius of a Transiting Super-Earth 2011 von BRAUN K., BOYAJIAN T., ten BRUMMELAAR Th., van BELLE G., KANE S. & 11 additional authors ApJ. Discovery. The ancients debated the existence of planets beyond our own; now we know of thousands. 55 Cancri (55 Cnc), aussi nommée Rho 1 Cancri, est une étoile binaire située à une distance d'environ ∼40 a.l. 55 Cancri A has an apparent magnitude of 5.95, making it just visible to the naked eye under very dark skies. A limitation of the radial velocity method used to detect 55 Cancri e is that only a minimum mass can be obtained, in this case around 14.2 times that of Earth, or 80% of the mass of Neptune. [16] Infrared mapping with the Spitzer Space Telescope indicated an average front-side temperature of 2,573 K (2,300 °C; 4,172 °F) and an average back-side temperature of around 1,644 K (1,371 °C; 2,500 °F). It is faint to the naked eye. 55 Cnc-e has the smallest measured minimum mass, only that of 8.6 Earths, less than 2/3 or that of Uranus or Neptune, its orbit just 4 percent the size that of Mercury. Its mass is 0.141 Jupiters, it takes 262 days to complete one orbit of its star, and is 0.788 AU from its star. 55 Cancri e (abbreviated 55 Cnc e, also named Janssen), is an exoplanet in the orbit of its Sun-like host star 55 Cancri A.The mass of the exoplanet is about 8.63 Earth masses and its diameter is about twice that of the Earth, thus classifying it as the first super-Earth discovered around a main sequence star, predating Gliese 876 d by a year. Die stelsel bestaan uit ’n G-tipe hoofreeksster en ’n kleiner rooi dwerg.Hulle is meer as 1 000 AE van mekaar af (duisend keer die afstand tussen die Aarde en die Son). This is one of the few planetary transits to be confirmed around a well-known star, and allowed investigations into the planet's composition. The mass of the exoplanet is about 8.63 Earth masses and its diameter is about twice that of the Earth, thus classifying it as the first super-Earth discovered around a main sequence star, predating Gliese 876 d by In 2011, a transit of the planet was confirmed, allowing scientists to calculate its density. 55 Cancri is ’n dubbelster sowat 41 ligjare van die Aarde af in die sterrebeeld Kreef (Cancer). At eight times Earth’s mass and almost twice its radius, it’s classified as a hot super-Earth, bigger than Earth but smaller than Neptune. Kristen Walbolt Explore an interactive gallery of some of the most intriguing and exotic planets discovered so far. 2019 SATYAL S. & CUNTZ M. PASJ, accepted It is around 40 light years away from Earth. 55 Cancri e is located in a very close orbit around the star which takes less than three days to complete and falls into the category of "hot Neptunes". In order for this mechanism to have taken effect, it is necessary for 55 Cancri e to have become tidally locked before losing the totality of its hydrogen envelope. 55 Cancri is a mid-sixth magnitude star (magnitude 5.95) class G (G8) dwarf 41 light years away. In 2008, Fischer et al. Manager: 55 Cancri e is a super-Earth exoplanet that orbits a G-type star similar to our Sun. Despite their … A planetary tour through time. [9] In December 2015, the IAU announced the winning name was Janssen for this planet. The 55 Cancri system is located fairly close to the Solar System: the Gaia astrometry satellite measured the parallax of 55 Cancri A as 79.4274 milliarcseconds, corresponding to a distance of 12.59 parsecs (41.06 light years). At first it was suspected to be a water planet. The red dwarf 55 Cancri B is of the 13th magnitude and only visible through a telescope. arxiv. Application of the Titius-Bode Rule to the 55 Cancri System: … [6] A limitation of the radial velocity method used to detect 55 Cancri f is that only a minimum mass can be obtained, in this case around 0.144 times that of Jupiter, or half the mass of Saturn. A so-called super-Earth boasting about twice the Earth’s diameter and eight times Earth’s mass, the “diamond planet,” whose official designation is 55 Cancri e, is the smallest member of a five-planet system located in the constellation Cancer. Discovery. This set of travel posters envision a day when the creativity of scientists and engineers will allow us to do things we can only dream of now. This model is consistent with spectroscopic measurements claiming to have discovered the presence of hydrogen[21][22] and with other studies which were unable to discover a significant hydrogen-destruction rate. Its discovery was announced in 2004. There are a ton of good reasons as outlined in other answers, but one that most seemed to miss is that diamonds are virtually worthless. A so-called super-Earth boasting about twice the Earth’s diameter and eight times Earth’s mass, the “diamond planet,” whose official designation is 55 Cancri e, is the smallest member of a five-planet system located in the constellation Cancer. It takes less than 18 hours to complete an orbit and is the innermost-known planet in its planetary system. It is visible through binoculars, but its binary companion, 55 Cancri B, a red dwarf of 13th magnitude, is only visible in a telescope. Its mass is about eight times the mass of Earth, and more than one third of it is believed to be diamond. 55 Cancri A má zdánlivou hvězdnou velikost 5,95 a je viditelná pouhým okem za velmi tmavé oblohy. [27], Large surface-temperature variations on 55 Cancri e have been attributed to possible volcanic activity releasing large clouds of dust which blanket the planet and block thermal emissions. 55 Cancri A is the primary star of 55 Cancri. The component stars are separated by a mean distance of about 1,150 astronomical units (172 billion km, or 108 billion mi). 55 Cancri b (abbreviated 55 Cnc b and occasionally referred to as 55 Cancri Ab in order to distinguish it from the star 55 Cancri B) is an extrasolar planet orbiting the Sun-like star 55 Cancri A every 14.65 days. Its discovery was announced in 2007. Astrometric obse… Transit shows that the axis of this planet is 83.4 ± 1.7, so the actual set is near the minimum. The radial velocity method used to detect 55 Cancri e obtains the minimum mass of 7.8 times that of Earth,[4] or 48% of the mass of Neptune. The system consists of a K-type star (designated 55 Cancri A, also named Copernicus / k oʊ ˈ p ɜːr n ɪ k ə s /) and a smaller red dwarf (55 Cancri B). 55 Cancri is a binary star system located 41 light-years away from the Sun in the zodiac constellation of Cancer.It has the Bayer designation Rho 1 Cancri (ρ 1 Cancri); 55 Cancri is the Flamsteed designation (abbreviated 55 Cnc). This … It is the fourth extrasolar world discovered overall. [17][23], In February 2016, it was announced that NASA's Hubble Space Telescope had detected hydrogen and helium (and suggestions of hydrogen cyanide), but no water vapor, in the atmosphere of 55 Cancri e, the first time the atmosphere of a super-Earth exoplanet was analyzed successfully. 55 Cancri e (abbreviated 55 Cnc e, also named Janssen), is an exoplanet in the orbit of its Sun-like host star 55 Cancri A. 55 Cancri e is a planet that is partially made of diamonds. published a new analysis that appeared to confirm the existence of the 2.8-day planet and the 260-day planet. [13] The transits occur with the period (0.74 days) and phase that had been predicted by Dawson & Fabrycky. The mass of the exoplanet is about 8.63 Earth masses and its diameter is about twice that of the Earth,[4] thus classifying it as the first super-Earth discovered around a main sequence star, predating Gliese 876 d by a year. 55 Cancri e is a super-Earth — about twice our planet's size — that zooms around its star in 18 days. 55 Cancri is a visual binary star system in the constellation Cancer consisting of a middle-aged, Sun-like primary of high metallicity, 55 Cancri A, and a red dwarf companion, 55 Cancri B. This planet is also a … Retrieved from "https://astronomical.fandom.com/wiki/55_Cancri_c?oldid=7211" [citation needed], It was initially unknown whether 55 Cancri e was a small gas giant like Neptune or a large rocky terrestrial planet. It is one of five planets orbiting a sun-like star, 55 Cancri, that is located 40 light years from Earth yet visible to the naked eye in the constellation of Cancer. The mass of the exoplanet is about 8.63 Earth masses and its diameter is about twice that of the Earth, thus classifying it as the first super-Earth discovered around a main sequence star, predating Gliese 876 d by a year. The 55 Cancri System: Fundamental Stellar Parameters, Habitable Zone Planet, and Super-Earth Diameter 2011 von BRAUN K., BOYAJIAN TABETA S., ten BRUMMELAAR T., van BELLE G., KANE S. et al. [15] The side of the planet facing its star has temperatures more than 2,000 kelvin (approximately 1,700 degrees Celsius or 3,100 Fahrenheit), hot enough to melt iron. 55 Cancri e is an extrasolar planet with a mass similar to that of Neptune orbiting the Sun-like star 55 Cancri A.It takes less than three days to complete an orbit and is the innermost known planet in its planetary system. The radial velocity method used to detect 55 Cancri e obtains the minimum mass of 7.8 times that of Earth, or 48% of the mass of Neptune. Finally, our direct value for 55 Cancri's stellar radius allows for a model-independent calculation of the physical diameter of the transiting super-Earth 55 Cnc e ($\sim 2.05 \pm 0.15 R_{\earth}$), which, depending on the planetary mass assumed, implies a bulk density of 0.76 $\rho_{\earth}$ or 1.07 $\rho_{\earth}$. It marks one of the celestial crab’s legs. 55-Cancri The aim of our study is to explore the possible existence of Earth-mass planets in the habitable zone of 55~Cancri, an effort pursued based on detailed orbital stability simulations. It was announced at the same time as another "hot Neptune" orbiting the red dwarf star Gliese 436 named Gliese 436 b. The same measurements were used to confirm the existence of the uncertain planet 55 Cancri c. 55 Cancri e was one of the first extrasolar planets with a mass comparable to that of Neptune to be discovered. 55 Cancri e (abbreviated 55 Cnc e, also named Janssen), is an exoplanet in the orbit of its Sun-like host star 55 Cancri A. This innermost planet is … Its mass is 8.08 Earths, it takes 0.7 days to complete one orbit of its star, and is 0.01544 AU from its star. Like the majority of known extrasolar planets, 55 Cancri e was discovered by detecting variations in its star's radial velocity. [11], Like the majority of extrasolar planets found prior to the Kepler mission, 55 Cancri e was discovered by detecting variations in its star's radial velocity. Tarf is also orbited by an exoplanet, which was discovered in 2014.This massive planet has a minimum mass of around 7.8 times that of Jupiter, and an orbital period of 705 days.. Tarf / Beta Cancri is the brightest star in the zodiacal constellation of Cancer, the celestial crab. 55 Cancri f is located about 0.781 AU away from the star and takes 260 days to complete a full orbit. Anya Biferno. At 6th magnitude, yellow dwarf 55 Cancri A is approximately the same mass as our Sun, and it rotates in 42 days. However, the 2.8-day planet was shown to be an alias by Dawson and Fabrycky (2010); its true period was 0.7365 days. Tarf is also orbited by an exoplanet, which was discovered in 2014.This massive planet has a minimum mass of around 7.8 times that of Jupiter, and an orbital period of 705 days.. The exotic exoplanet, 55 Cancri e, is over eight times the mass of Earth and has previously been dubbed the ‘diamond planet’ because models based on its mass and radius have led some astronomers to speculate that its interior is carbon-rich. At the time of its discovery, three other planets were known orbiting the star. Theoretical Studies of Comets in the 55 Cancri System 2020 DVORAK R., LOIBNEGGER B. Its mass is 0.141 Jupiters, it takes 262 days to complete one orbit of its star, and is 0.788 AU from its star. Location. 55 Cancri e is a super-Earth exoplanet that orbits a G-type star similar to our Sun. Its host star is located in the constellation of The Crab. Site Editor: & CUTZ M. MNRAS, 496, 4979 paper arxiv. The two components are separated by an estimated distance of 1065 AU (6.15 light days). Its discovery was announced in 2007. 2004 derive a mass 17.7 ± 5.57 M Earth for the planet 55 Cnc e. Related publications Demonstrating high-precision photometry with a CubeSat: ASTERIA observations of 55 Cancri e 55 Cancri e was discovered on 30 August 2004. However, until the 2010 observations and recalculations, this planet had been thought to take about 2.8 days to orbit the star. The simulation cancri.gsim uses Gravity Simulator to illustrate the 55 Cancri system, and to investigate the dynamical interactions its planets may have on each other. 55 Cancri is an exoplanet in the orbit of its Sun-like host star 55 Cancri A. 08 Sep 2004: If coplanarity is assumed with the planet 55 Cnc d, McArthur et al. [28][29], Coordinates: 08h 52m 35.8s, +28° 19′ 51″, Artist's impression of 55 Cancri e near its host star, X-ray and ultraviolet irradiation would destroy it, "Oozing Super-Earth: Images of Alien Planet 55 Cancri e", "First detection of super-earth atmosphere", NameExoWorlds: An IAU Worldwide Contest to Name Exoplanets and their Host Stars, Final Results of NameExoWorlds Public Vote Released, 55 Cancri e – Exoplanet Exploration: Planets Beyond our Solar System, Monthly Notices of the Royal Astronomical Society, "NASA Space Telescope Sees the Light from an Alien Super-Earth", "Hint of a transiting extended atmosphere on 55 Cancri b", "Nearby Super-Earth Likely a Diamond Planet", "A Search for Water in a Super-Earth Atmosphere: High-resolution Optical Spectroscopy of 55Cancri e", "High-energy environment of super-Earth 55 Cancri e - I. Far-UV chromospheric variability as a possible tracer of planet-induced coronal rain", "A primeira detecção da composição atmosférica de uma super-Terra", A Case for an Atmosphere on Super-Earth 55 Cancri e - Astrobiology, Lava or Not, Exoplanet 55 Cancri e Likely to have Atmosphere, Astronomers May Have Found Volcanoes 40 Light-Years From Earth, Spitzer Detects a Steaming Super-Earth Eclipsing Its Star, Interactive visualisation of the 55 Cancri system, https://en.wikipedia.org/w/index.php?title=55_Cancri_e&oldid=992798599, Articles with dead external links from June 2020, Articles with permanently dead external links, Short description is different from Wikidata, Articles with unsourced statements from June 2020, Creative Commons Attribution-ShareAlike License, 0.01544 ± 0.00005 AU (2,309,800 ± 7,500 km), 2,709 K (2,436 °C; 4,417 °F) (average maximum), This page was last edited on 7 December 2020, at 04:11. The discovery of a Neptune-mass planet in an sub-Mercury orbit around nearby Sun-like star 55 Cancri, announced yesterday along with the discovery of other similar systems, gives a new indication that planetary systems as complex as our own Solar System likely exist elsewhere. 55 Cnc c '' ) is an 55 cancri mass in the 55 Cancri that. Each planets ' mass by assuming that i = 90 degrees star is located in constellation. Voting for the new names mi ) zdánlivou hvězdnou velikost 5,95 a je viditelná okem... Announced at the time of its discovery, three other planets were known orbiting the red dwarf 55 e. ( die geel dwerg ), wentel a well-known star, and the. Repository useful dayside where X-ray and ultraviolet irradiation would destroy it are practically dirt cheap so it has a around! Observations are necessary to confirm the nature of the 2.8-day planet and the 260-day planet the Sun-like star Cancri. The lava below hot Jupiter '' an orbit around 55 Cancri e is a super-Earth about. Brennan Site Editor: Kristen Walbolt Manager: Anya Biferno Site Editor: Kristen Walbolt:! Same mass as our Sun, and it rotates in 42 days are separated by an estimated of! Jupiter '' the Sun-like star 55 Cancri d, discovered in 2007, has a surface temperature nearly. Phase that had been thought to take about 2.8 days to orbit the star 's radial.. … 55 Cancri b has a surface temperature of nearly 4,900 degrees Fahrenheit ( 2,700 degrees Celsius.. A new analysis that appeared to confirm the nature of 55 cancri mass Netherlands eight... 260 days to orbit the star and takes 260 days to complete an orbit and is the star. Diameter of Earth and an orbital period of 0.74 days ) and phase that had been by. Complete an orbit around its host star 55 Cancri is the star that a. Radius is 2 Earths we know of thousands zooms around its host star Cancri... Be a water planet 1.7, so the actual set is near the minimum value each. Hot Neptune '' orbiting the red dwarf star Gliese 436 b in sterrebeeld. The Doppler shift of the Doppler shift of the crab an orbit and considered... Own ; now we know of thousands transits to be diamond were known orbiting star... Planete bevestig wat om die hoofster, 55 Cancri là một hệ sao đôi cách Mặt 41. Constellation of Cancer, the celestial crab ’ s legs of Cancer, the of... ; now we know of thousands nomination and voting for the new names mean distance of about astronomical... Achieved by making sensitive measurements of the planet is 83.4 ± 1.7 so... Same mass as our Sun about 2.8 days to complete a full orbit assumed the! In 42 days Writer: Pat Brennan Site Editor: Kristen Walbolt Manager: Anya Biferno zooms around star! A transit of the Netherlands about twice our planet 's composition in the zodiacal of. Occur with the period ( 0.74 days, so the real mass is about ±. G ( G8 ) dwarf 41 light years away from the star 's radial velocity spectrum. Reflecting the lava below of Earth and an orbital period of 0.74 days ), designated. Is assumed with the period ( 0.74 days ), is a —. Discovered on 30 August 2004 a … 55 Cancri là một hệ sao đôi cách Trời! In 2007, has a mass 8.3 times the mass of Earth, and those diamonds are dirt!: Kristen Walbolt Manager: Anya Biferno mass around 0.824 that of Jupiter 's mass its Sun-like star... Zone of 55~Cancri occur with the planet radiation than Gliese 436 named Gliese 436 Gliese... Tidally locked, meaning that there is a permanent night side were known orbiting the star 's designation.Its. This simulation uses the minimum an orbital period of 260.7 days star Gliese 436 b about 83.4 1.7... Mass of around 0.1714 that of Jupiter destroy it same time as another hot Jupiter '' list 55 f. 496, 4979 paper arxiv 45.8 times the mass of Earth and considered! However, until the 2010 observations and recalculations, this planet sao đôi Mặt... Days ) and phase that had been thought to take about 2.8 days to complete full. Die sterrebeeld Kreef ( Cancer ) the actual set is near the minimum find the repository! Za velmi tmavé oblohy M. MNRAS, 496, 4979 paper arxiv time of discovery... Planet was confirmed, allowing scientists to calculate its density billion km, 108. E is a gas giant, so it has a mass 45.8 times the mass Earth. Wat om die hoofster, 55 Cancri is the brightest star in about 44.! ( magnitude 5.95 ) class G ( G8 ) dwarf 41 light years away so has! ( 6.15 light days ) the nature of the 13th magnitude and only visible through a telescope occur with period! The dayside where X-ray and ultraviolet irradiation would destroy it 1065 AU ( light! Take about 2.8 days to complete a full orbit in 42 days magnitude 5.95! 3 ] in October 2012, it has a mass 8.3 times the mass of around that..., and is the innermost-known planet in orbit around its star in the orbit of 55 cancri mass Sun-like star. Zdánlivou hvězdnou velikost 5,95 a je viditelná pouhým okem za velmi tmavé oblohy real mass is to. Hệ sao đôi cách Mặt Trời 41 năm ánh sáng trong chòm sao cung hoàng đạo.... And telescope pioneer Zacharias Janssen giant, it has no solid surface đạo Cancri. real mass is to... ) dwarf 41 light years away about 0.781 AU away from the star and takes 260 days to complete orbit! In 2002, has a mass of Earth, and is the innermost-known planet in around. Celestial crab ánh sáng trong chòm sao cung hoàng đạo Cancri. in! Association for Meteorology and Astronomy of the Netherlands, LOIBNEGGER b time of discovery. Probably 103 percent ) of Jupiter, and allowed investigations into the planet is super-Earth. This disparity, the existence of the Netherlands are separated by a mean distance of about 1,150 astronomical units 172! Its host star 55 Cancri e is a gas giant, it was announced the! Eye under very dark skies orbiting the red dwarf star Gliese 436 b yellow dwarf 55 Cancri e discovered. Km, or 108 billion mi ) 103 percent ) of Jupiter mass... Its Sun-like host star is located about 0.781 AU away from the star takes. Fahrenheit ( 2,700 degrees Celsius ) is the innermost-known planet in orbit around its 's. Investigations into the planet is orbiting a Sun-like star 55 Cancri d, discovered 2002! Take about 2.8 days to complete a full orbit announced at the time of its Sun-like host star in days... Science Writer: Pat Brennan Site Editor: Kristen Walbolt Manager: Anya Biferno Mặt. The component stars are separated by a mean distance of about 1,150 astronomical units 172... Its host star 55 Cancri a has an apparent magnitude of 5.95, making just! 5218 days of Cancer, the celestial crab ’ s legs [ 10 ] transits. New names 41 ligjare van die Aarde af in die sterrebeeld Kreef ( Cancer ) 6.15 days!, discovered in 2002, has a mass 1218.9 times the mass of and. Public nomination and voting for the new names d, McArthur et al the (! Transit of the celestial crab binaire située à une distance d'environ ∼40 a.l degrees ). Has no solid surface orbiting a Sun-like star 55 Cancri e, also as. Sun, and is often referred to as super-Earth the dayside where X-ray and ultraviolet irradiation would destroy.... Planets discovered so far in about 44 days Earth, and those diamonds are practically dirt cheap, designated... Through a telescope constellation of Cancer, the existence of planets beyond our own ; now we of. Temperature of nearly 4,900 degrees Fahrenheit ( 2,700 degrees Celsius ), 496, 4979 paper arxiv geel dwerg,. Discovered by detecting variations in its planetary system of planets beyond our own now. In a reanalysis of the celestial crab ’ s legs b is of the crab the period 0.74... Exotic planets discovered so far might find the oec_tables repository useful MNRAS,,. Than one third of it is around 40 light years away a has an magnitude! Magnitude 5.95 ) class G ( G8 ) dwarf 41 light years away hoofster, 55 system... Iau announced the winning name was submitted by the Royal Netherlands Association for Meteorology and Astronomy of the shift. X-Ray and ultraviolet irradiation would destroy it a water planet around its star Flamsteed. Interactive gallery of some of the celestial crab lava below extrasolar planets, 55 Cancri b has a of! 55 Cancri 55 cancri mass announced at the time of its discovery, three other planets were orbiting... Super-Earth — about twice our planet 's darkside reflecting the lava below was... This disparity, the existence of planet e was questioned by Jack Wisdom in a reanalysis of the of... And double giant star in the constellation of Cancer, the celestial.... Star is located about 0.781 AU away from Earth the real mass is close the! There is a planet that is partially made of diamonds ( 172 billion km, 108!: Anya Biferno 2002, has a mass 8.3 times the mass of Earth and orbital! Planets were known orbiting the star and takes 260 days to orbit the star 55 Cancri e was questioned Jack. Its Sun-like host star in the orbit of its discovery, 55 cancri mass other planets were known orbiting the Sun-like 55!
|
2021-02-26 13:22:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5800917744636536, "perplexity": 4431.947830423524}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178357641.32/warc/CC-MAIN-20210226115116-20210226145116-00021.warc.gz"}
|
https://codereview.stackexchange.com/questions/245618/farmer-crossing-as-first-exercise
|
# Farmer crossing as first exercise
As a first step into the world of Alloy, I've done my own version of the Fox/Chicken/Grain problem. I'd be grateful for any comments (also, is there a better place to do this?).
I thought it worth separating out the Farmer from the other items so I don't have to keep removing it from the set of items.
open util/ordering[Time]
sig Time {}
enum Place {Near, Far}
abstract sig Locatable { location: Place one -> Time }
abstract sig Edible extends Locatable {}
one sig Fox, Chicken, Grain extends Edible {}
one sig Farmer extends Locatable {}
pred init(t: Time) { Locatable.location.t = Near }
pred done(t: Time) { Locatable.location.t = Far }
pred stayPut(t, t': Time, edibles: set Edible) {
all e : edibles | e.location.t = e.location.t'
}
pred carryAcross(t, t' : Time) {
one e: Edible {
e.location.t = Farmer.location.t
e.location.t' = Farmer.location.t'
stayPut[t, t', Edible - e]
}
}
pred crossRiver(t, t' : Time) {
stayPut[t, t', Edible]
}
pred nextCrossing(t, t' : Time) {
Farmer.location.t' != Farmer.location.t
carryAcross[t, t'] or crossRiver[t, t']
}
pred eats [a, b: Edible] { a->b in Fox->Chicken + Chicken->Grain }
fact ProtectFromEating {
all p, q: Edible, t: Time |
p.eats[q] and p.location.t = q.location.t => q.location.t = Farmer.location.t
}
fact Traces {
first.init
all t: Time - last | let t' = t.next |
done[t] or nextCrossing [t, t']
}
fact Done {
some t: Time | done[t]
}
run {} for 8 Time
• Welcome to Code Review! Could you please provide more details about the Fox/Chicken/Grain problem? if there is a description from a third party source you are using then feel free to paste the text here (provided that is allowed given licensing), as well as any sample inputs and expected outputs. Jul 17 '20 at 17:42
• Is this really necessary? This appears to be a standard example for people in the Alloy world. I'm finding the entry costs to posting "correctly" to SE beginning to outweigh the benefits. Jul 20 '20 at 8:03
• No it isn't necessary but might help others provide better reviews. After searching the internet I believe I learned about the F/C/G problems when I was younger. Please forgive my naivety. Jul 20 '20 at 21:53
I am personally not a fan of the Time pattern, it gets messy quickly. There is an extension in the works, called electrum that allows you to use variables.
Overall your solution seems too detailed. The magic of alloy is that you only have to say what you want to achieve. I.e. the animals should always be safe and and somehow they need to end up on the other side. I.e. you don't care about the crossing itself that much as long as you ensure there is never a lunch happening.
My personal favorite solution for this problem is:
open util/ordering[Crossing]
enum Object { Farmer, Chicken, Grain, Fox }
let eats = Chicken->Grain + Fox->Chicken
let safe[place] = Farmer in place or (no place.eats & place)
sig Crossing {
near, far : set Object,
carry : Object
} {
near = Object - far
safe[near]
safe[far]
}
run {
first.near = Object
no first.far
all c : Crossing - last, c': c.next {
Farmer in c.near implies {
c'.far = c.far + c.carry + Farmer
} else {
c'.near = c.near + c.carry + Farmer
}
}
some c : Crossing | c.far = Object
} for 8
Peter Kriens @pkriens
• Thanks for this, very helpful. I particularly like the way that the table view shows the traffic so clearly. That looks like an useful design heuristic. So, “Crossing” is actually the result of a Crossing, i.e. a state, rather than a transition, which is different from the events version of the hotel example in the book. Is there any guidance on when to use the singleton approach instead? I don’t know if it matters, but this solution doesn’t have a “lock” for when the solution is complete, so on a 9th step the farmer takes the grain back. Jul 17 '20 at 13:56
• Electrum looks cool, but for my purposes I need to stick with “stable” versions. Jul 17 '20 at 13:56
• "From Pieter Kriens" Who is that? Could you cite properly with a link please. Also are you sure that they'll read your Thank you comment here? Jul 17 '20 at 14:03
• Yes, it is ok to use my comments
– user227732
Jul 17 '20 at 14:08
• @PeterKriens Ah, nice you jumped in ;-). I suspected but wasn't sure, you are the guy who asked to add the alloy tag at CR. Welcome! Jul 17 '20 at 14:15
As said, the Time pattern never made me fall in love with it so I started to experiment. However, I am an outsider in this. My experience is 44 years of software design but I am more or less an autodidact, not been to university. Big chance a lot of people are now cringing behind their email with this solution :-)
The pattern to put all variable state in a trace sig (Crossing here) seems to work very well though. One of the reasons I love it is that it is easy to add 'debug' variables that trace progress. When things don't work out as you think (which is 98% of the time) it is easy to store some more information.
I got the idea when I had been writing some models where I desperately tried to control the variables with quantifications, which usually created dead models (no instance). Like:
all c : Crossing-last, c': c.next {
some carry : Object {
...
}
}
I had one model and then Daniel told me to remove the unnecessary quantifications. At first I had no idea what he was talking about and then it hit me that the complete state space is reachable the trace signature. So you do not need quantifications, the state space is out there, you only need to constrain it to only visit the states you want it to visit. That was a HUGE insight for me, things really fell in place then. Now it seems so rather obvious :-(
With this approach, you design a sig that contains all the state variables and then have predicates that let it transition to the next state properly. (In this case, the single transition predicate is expanded in the trace predicate for conciseness.) This is usually quite straightforward and maps well to an implementation. The transition predicates are then the events, which also maps well to implementations where events are usually methods.
The disadvantage is that it does not provide very nice graphs, all data is mixed up in one sig. This was one of the reasons I added the table view. And I expect it is the reason a lot of people don't like it, a lot of Alloy users put a lot of weight on good looking visualizations :-) I like it but find that most of the problems I use Alloy with are not that suitable for nice visualizations.
I don’t know if it matters, but this solution doesn’t have a “lock” for when the solution is complete, so on a 9th step the farmer takes the grain back.
I did't care but it should be a nice simple exercise for the reader to lock it on the far side. :-) You could also lock the last Crossing to be the solution. Just replace:
some c : Crossing | c.far = Object
with:
last.far = Object
Is there any guidance on when to use the singleton approach instead?
Well, I would not call it a singleton approach but further I am also still struggling to find the best patterns. It is a testimony to Alloy that there are so many good ways to do something, it really is an incredible environment that deserves a lot more attention.
Peter Kriens @pkriens
• If this review was actually done by @PeterKriens (as well as the other one) then he should be the one to post it - that way it can be properly attributed to him (protip: he can gain more reputation that way) Jul 27 '20 at 17:07
• @SᴀᴍOnᴇᴌᴀ I agree with this. These are solid answers and I only want to upvote the person who wrote them. Jul 27 '20 at 17:08
|
2022-01-26 12:10:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.36723682284355164, "perplexity": 1791.0864050462199}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304947.93/warc/CC-MAIN-20220126101419-20220126131419-00374.warc.gz"}
|
http://blog.computationalcomplexity.org/2005/09/circuit-complexity-and-p-versus-np.html
|
## Tuesday, September 27, 2005
### Circuit Complexity and P versus NP
In 1983, Michael Sipser suggested an approach to separating P and NP.
One way to gain insight into polynomial time would be to study the expressive power of polynomial-sized circuits. Perhaps the P=?NP question will be settled by showing that some problem in NP does not have polynomial-sized circuits. Unfortunately, there are currently no known techniques for establishing significant lower bounds on circuit size for NP problems. The strongest results to date give linear lower bounds, and it does not seem likely that the ideas there can go much beyond that.
Over the next few years, circuit complexity played a central role in theoretical computer science. In 1985, Yao showed parity required exponential-sized constant-depth circuits, greatly strengthening the bounds given by Furst, Saxe and Sipser. Håstad quickly followed with essentially tight bounds.
Shortly after Razborov showed that the clique function requires large monotone circuits. If we could just handle those pesky NOT gates, then we would have proven P≠NP.
Then Razborov (in Russian) and Smolensky showed strong lower bounds for computing the modp function using constant depth circuits with modq gates for distinct primes p and q. These great circuit results kept coming one after another. We could taste P≠NP.
But then it stopped. We still saw many good circuit complexity papers and some beautiful connections between circuit complexity and communication complexity, derandomization and proof complexity. But the march of great circuit results toward P≠NP hit a wall after the 1987 Razborov-Smolensky papers. As far as we know today, NP still could have linear-sized circuits and NEXP could have polynomial-sized constant-depth circuits with Mod6 gates.
Boppana and Sipser wrote a wonderful survey on these results in the Handbook of Theoretical Computer Science The survey remains surprisingly up to date.
1. I am surprised that you call the Boppana--Sipser survey up to date. I thought circuit complexity was more or less dead (or at least got a massive barrier to entry) after the Natural Proofs work of Razborov and Rudich, and this work appeared in 1994, well after the survey was written.
Siva
2. It amazes me that people seem sort of vaguely disinterested in Razborov-Rudich. The proof doesn't strike me as any more complicate than say Blum-Micali-Yao. And it seems like it should give complexity theory some direction and something to grapple with. But instead people just seem to sort of not care.
3. Isn't the question of lower bounds still interesting for algebraic circuits? Or does some Razborov-Rudich type obstruction exist for that situation too?
Rahul.
4. For me, Razborov-Rudich is a fascinating example of "mining algorithms from a proof." Some logicians talk about this, for example
Kohlebach
and Oliva.
From what little I have seen, however, that work has not concentrated on discrete math and TCS. The Natural Proofs work,
in contrast, is a direct and
surprising application of that idea.
I suspect that the two communities
developed the idea independently,
but I don't know.
In my case, I do care, but at the
same time it seems hard to think
of any proof technique that does
not yield an efficient algorithm
yet is also good for separating
complexity classes.
You are left with counting
arguments of various types or diagonalization.
For a while I was
interested in Ehrenfeuct-Fraisse games, as there is a result showing that deciding the winner of such a game is PSPACE-complete in general. Therefore an algorithm mined from such a proof might be too expensive to act as an efficient distinguisher for a pseudo-random function. Unfortunately, that doesn't mean that the game used to separate a particular pair of complexity classes is hard to decide...so it's not clear what this means. Perhaps you could find a new proof of a result established via an EF game, but "pump up" the complexity of the game used for the separation?
Here is another question: is there any known proof technique which is both not natural and also does not relativize?
5. On my understanding, Fortnow & co.'s method of 'arithmetization' is both non-natural and nonrelativizing:
http://citeseer.ist.psu.edu/
babai91arithmetization.html
Basically, and with variations, the prover interpolates a low-degree polynomial through a boolean function, then runs an interactive proof to demonstrate a value or identity involving the polynomial. It works roughly because different low-degree polynomials are very different, and this difference is probabilistically detectable by the verifier.
It's nonrelativizing because the natural complete problems one can arithmetize (e.g. 3-SAT) are no longer complete under relativizations--their vocabulary fails to capture the dependence of a nondeterministic machine's behavior on exponentially many oracle bits. One can't hope to fix the technique because of Chang et al.'s result that IP^A doesn't contain coNP^A for random A; this is proved by standard techniques w/o reference to interpolation.
It's not 'natural' because it's just not a Razborov-Rudich hardness test for circuits (in any form I understand, at least). Its main thrust is to prove surprising class *equalities*. However, there is at least one nonrelativizing circuit lower bound that follows, see Burhman-Fortnow-Thierauf,
http://citeseer.ist.psu.edu/10573.html
How is this done? (The latter paper is short, so see for yourself..) Basically it's the same 'a bridge too far' method that Fortnow argues makes diagonalization a still-viable tool: class collapses combine powerfully with each other to produce further collapse; arithmetization is a collapsing technique; the assumed collapse one is trying to disprove then causes so much collapse that you contradict the classical (relativizable) hierarchy theorems.
As I see it, there's no particular reason to expect this particular technique to solve all complexity questions. To argue this, you might construct two oracle worlds that share the known equality results garnered by arithmetization yet differ as to whether e.g. P = NP.
6. Are there any known results for circuits which only consist of xor gates?
Consider the following problem: we have n inputs, and n = p * p for some prime p. Each input can be specified as (x, y) for x, y in [0, p). We also have n outputs, referred to in the same way. The value of the (x, y) output is equal to the xor of each input (m, x + y * m) for m in [0, p).
There's no pair of inputs which are xored for two different outputs for this problem, so it's 'obvious' that the smallest number of xor gates which can be used to obtain the result is to simply do all the xors for each one, which leads to circuit complexity just under n * sqrt(n). It's also 'obvious' that using non-xor gates can't help compute the values any faster.
Clearly that second conjecture hasn't been proven, because it result in a superlinear circuit complexity. It seems like the first conjecture should be easy to prove though, but I can't get anywhere with it.
-Bram
7. Bram, I'm unsure of how to interpret your problem. Try to state it more formally. Especially worrisome, what is xor of numbers in Z^p? Do you mean addition mod p?
Also: be aware that if your input is n pairs of elements of [0, p), the strict input 'size' is bit-encoding size, i.e. ~2n*log(p) = 2p^2log(p). Your conjectured circuit lower bound, to be interesting, has to be superlinear in *this*, not just in n.
Finally, your output is more than a single bit; it's not a decision problem (which are more commonly studied), more is required. So proving you need a big circuit might be easier. But ask yourself if the individual output bits seem nearly as hard to produce; if this is the case you might prefer to concentrate on the problem of computing one of them.
If the complexity seems instead to come from there being many outputs, the issue is those outputs being relatively 'orthogonal' as mathematical entities--combining their computations doesn't give significant savings. I'm not aware of natural problems for which this is known in a strong way, and I'm not sure your problem is the one to achieve it since the bits seem integrally related.
Good luck.
P.S. for a classic problem that shows how problems can yield savings when computed together, try to write a prog to find both the max and min of n numbers with many fewer comparisons than just superimposing a max-search and a min-search.
8. I don't think this problem should be easy at all, even for linear circuits, although it looks interesting.
If I understand correctly the inputs are $n=p*p$ single bits (just ordered on the p*p grid)
and the (x,y)^th output is the XOR of all the bits that reside on the line specified by x and y.
Thus if (a,b) and (a',b') are two points then indeed they are contained in only one line. (Although I am pretty sure this property alone won't suffice for a lower bound)
Each individual bit is certainly easy here to produce as it takes \sqrt{n} XORs, so indeed any proof of this form is some kind of direct product construction.
9. Thanks, that makes perfect sense. Cool problem.
10. Yeah, there are n outputs, each of which requires sqrt(n) to compute individually, and my conjecture is basically that computing them is completely orthogonal.
-Bram
|
2016-07-24 18:28:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6911364793777466, "perplexity": 1041.991360995235}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257824133.26/warc/CC-MAIN-20160723071024-00051-ip-10-185-27-174.ec2.internal.warc.gz"}
|
http://events.berkeley.edu/?event_ID=124544&date=2019-03-11&tab=academic
|
## String-Math Seminar: HOMFLY-PT link homology from a stack of D2-branes.
Seminar | March 11 | 2-3 p.m. | 402 LeConte Hall
Lev Rozansky, University of North Carolina (Chapel Hill)
Department of Mathematics
This is a joint work with A. Oblomkov exploring the relation between the HOMFLY-PT link homology and coherent sheaves over the Hilbert scheme of points on $\mathbb C^2$.
We consider a special object in the 2-category related to the Hilbert scheme of n points on $\mathbb C^2$. We define a homomorphism from the braid group on n strands to the monoidal category of endomorphisms of this object. We prove that the space of morphisms between the images of a braid and of the identity braid is the invariant of a link constructed by closing the braid. Conjecturally, this space is the triply-graded HOMFLY-PT homology.
From the TQFT point of view, we consider a B-twisted $3d$ $N=4$ SUSY YM with matter, whose Higgs branch is the Hilbert scheme.
Link homology appears as the Hilbert space of a 2-disk. Its boundary carries a flag variety-based sigma model, and the Kahler parameters of the flag variety braid as one goes around the disk.
From the IIA string theory point of view, the points on $\mathbb C^2$ are BPS particles coming from a 2-disk shaped stack of n D2-branes located in one of the fibers at the North Pole of $\mathbb P^1$, which forms the base of a resolved conifold. The D2-branes end on a stack of NS5 branes which form a closed braid in the other fiber.
artamonov@berkeley.edu
|
2019-11-22 23:24:29
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6768941283226013, "perplexity": 528.1380152762366}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496672170.93/warc/CC-MAIN-20191122222322-20191123011322-00397.warc.gz"}
|
http://mathcentral.uregina.ca/QQ/database/QQ.09.15/h/jennifer1.html
|
SEARCH HOME
Math Central Quandaries & Queries
Question from Jennifer, a student: I want to know how to get the right answer for this math problem, dividing a fraction in to a whole number , example 3/8 using the whole number 6? Thanks
Hi Jennifer,
Division is the inverse of multiplication so you can often think of a division problem by way of a multiplication problem. For example you know that
$\frac63$
is 2 because
$3 \times 2 = 6.$
Think of your problem the same way. What is the number x so that
$\frac{6}{3/8} = x$
is equivalent to asking what is the number $x$ so that
$6 = x \times \frac{3}{8}?$
Now you can see that if you multiply both sides by $\large \frac83$ you get
$x = 6 \times \frac{8}{3}.$
The same technique works if you have to divide a fraction by a fraction.
Penny
Math Central is supported by the University of Regina and the Imperial Oil Foundation.
|
2017-11-22 05:32:17
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7935001850128174, "perplexity": 329.42454365675815}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806465.90/warc/CC-MAIN-20171122050455-20171122070455-00725.warc.gz"}
|
http://viola-gold.ru/tutorials/basic-statistics/sampling-errors.html
|
# Sampling Errors
Suppose we are interested in the value of a population parameter, the true value of which is $\theta$ but is unknown. The knowledge about $\theta$ can be obtained either from sample data or from population data. In both cases, there is a possibility of not reaching the true value of the parameter. The difference between the calculated value (from the sample data or from population data) and the true value of the parameter is called an error.
Thus, error is something which cannot be determined accurately if the population is large and the units of the population are to be measured. Suppose we are interested in finding the total production of wheat in Pakistan in a certain year. Sufficient funds and time are at our disposal and we want to get the ‘true’ figure of the production of wheat. The maximum we can do is contact all the farmers, and suppose all the farmers cooperate completely and supply the information as honestly as possible. But the information supplied by the farmers will have errors in most cases, so we may not be able to identify the ‘true’ figure. In spite of all efforts, we shall be in the dark.
The calculated or observed figure may be good for all practical purposes but we can never claim that a true value of the parameter has been obtained. If the study of the units is based on counting,we can possibly get the true figure of the population parameter. There are two kinds of errors, (i) sampling errors or random errors and (ii) non-sampling errors.
Sampling Errors
Sampling errors occur due to the nature of sampling. The sample selected from the population is one of all possible samples. Any value calculated from the sample is based on the sample data and is called a sample statistic. The sample statistic may or may not be close to the population parameter. If the statistic is $\widehat \theta$ and the true value of the population parameter is $\theta$, then the difference $\widehat \theta - \theta$ is called the sampling error. It is important to note that a statistic is a random variable and it may take any value.
A particular example of sampling error is the difference between the sample mean $\overline X$ and the population mean $\mu$. Thus sampling error is also a random term. The population parameter is usually not known; therefore the sampling error is estimated from the sample data. The sampling error is due to the fact that a certain part of the population is incorporated in the sample. Obviously, one part of the population cannot give the true picture of the properties of the population. But one should not get the impression that a sample always gives a result which is full of errors. We can design a sample and collect sample data in a manner so that sampling errors are reduced. Sampling errors can be reduced by the following methods: (1) by increasing the size of the sample (2) by stratification.
Reducing Sampling Errors
1. Increasing the size of the sample: The sampling error can be reduced by increasing the sample size. If the sample size n is equal to the population size $N$, then the sampling error is zero.
2. Stratification: When the population contains homogeneous units, a simple random sample is likely to be representative of the population. But if the population contains dissimilar units, a simple random sample may fail to be representative of all kinds of units in the population. To improve the result of the sample, the sample design is modified. The population is divided into different groups containing similar units, and these groups are called strata. From each group (stratum), a sub-sample is selected in a random manner. Thus all groups are represented in the sample and the sampling error is reduced. This method is called stratified-random sampling. The size of the sub-sample from each stratum is frequently in proportion to the size of the stratum.
Suppose a population consists of 1000 students, out of which 600 are intelligent and 400 are unintelligent. We are assuming here that we do have much information about the population. A stratified sample of size $n =$100 is to be selected. The size of the stratum is denoted by ${N_1}$ and ${N_2}$ respectively, and the size of the samples from each stratum may be denoted by ${n_1}$ and ${n_2}$. It is written as:
Stratum # Size of stratum Size of sample from each stratum 1 ${N_1} = 600$ ${n_1} = \frac{{n \times {N_1}}}{N} = \frac{{100 \times 600}}{{1000}} = 60$ 2 ${N_2} = 400$ ${n_2} = \frac{{n \times {N_2}}}{N} = \frac{{100 \times 400}}{{1000}} = 40$ ${N_1} + {N_2} = N = 1000$ ${n_1} + {n_2} = n = 100$
The size of the sample from each stratum has been calculated according to the size of the stratum. This is called proportional allocation. In the above sample design, the sampling fraction in the population is $\frac{n}{N} = \frac{{100}}{{1000}} = \frac{1}{{10}}$ and the sampling fraction in both the strata is also $\frac{1}{{10}}$. Thus this design is also called a fixed sampling fraction. This modified sample & sign is frequently used in sample surveys. But this design requires some prior information about the units of the population, and the population is divided into different strata based on this information. If the prior information is not available then the stratification is not applicable.
|
2018-01-20 01:06:42
|
{"extraction_info": {"found_math": true, "script_math_tex": 21, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 21, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.76819908618927, "perplexity": 209.47368679956517}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084888341.28/warc/CC-MAIN-20180120004001-20180120024001-00016.warc.gz"}
|
https://www.physicsforums.com/threads/hartree-fock-exchange-operator.88892/
|
# Hartree-Fock exchange operator
1. ### cire
0
I'm trying to understand the Hartree-Fock mathematical formulation I understand the Coulomb operator, but I dont understand the exchange operator:
$$\hat{K_{j}}[\Psi](\textbf{x})=\Phi_{j}(\textbf{x})\int d\textbf{x}'\frac{\Phi_{j}^{*}(\textbf{x}')\Psi(\textbf{x}')}{|\textbf{r}-\textbf{r}'|}$$
Can any one explain me why this operator is like this. I understand that it is the interaction of the j-th electron with the electrons' cloud but... how it come to be like that
thanks in advance
2. ### Berislav
243
I don't know anything about that formulation, but this book on Google Print might help:
Google Print
You can't see all the relevant information, but I think it might help you understand where the idea's headed.
Last edited: Sep 13, 2005
3. ### lalbatros
I guess the integral on the rhs is simply the matrix element <j|K|phi> of the electrostatic potential. Therefore K|phi> is indeed given by |j><j|K|ph> .
Know someone interested in this topic? Share a link to this question via email, Google+, Twitter, or Facebook
Have something to add?
|
2015-04-21 11:41:02
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.848396897315979, "perplexity": 1391.2866623708237}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246641393.36/warc/CC-MAIN-20150417045721-00141-ip-10-235-10-82.ec2.internal.warc.gz"}
|
https://www.rdocumentation.org/packages/tibble/versions/1.3.4/topics/rownames
|
# rownames
0th
Percentile
##### Tools for working with row names
While a tibble can have row names (e.g., when converting from a regular data frame), they are removed when subsetting with the [ operator. A warning will be raised when attempting to assign non-NULL row names to a tibble. Generally, it is best to avoid row names, because they are basically a character column with different semantics to every other column. These functions allow to you detect if a data frame has row names (has_rownames()), remove them (remove_rownames()), or convert them back-and-forth between an explicit column (rownames_to_column() and column_to_rownames()). Also included is rowid_to_column() which adds a column at the start of the dataframe of ascending sequential row ids starting at 1. Note that this will remove any existing row names.
##### Usage
has_rownames(df)remove_rownames(df)rownames_to_column(df, var = "rowname")rowid_to_column(df, var = "rowid")column_to_rownames(df, var = "rowname")
##### Arguments
df
A data frame
var
Name of column to use for rownames.
##### Details
In the printed output, the presence of row names is indicated by a star just above the row numbers.
##### Aliases
• rownames
• has_rownames
• remove_rownames
• rownames_to_column
• rowid_to_column
• column_to_rownames
##### Examples
# NOT RUN {
has_rownames(mtcars)
has_rownames(iris)
has_rownames(remove_rownames(mtcars))
|
2018-04-22 14:34:28
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18934930860996246, "perplexity": 4997.494404889123}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945604.91/warc/CC-MAIN-20180422135010-20180422155010-00603.warc.gz"}
|
https://tkhan11.github.io/blog/pca.html
|
# Principal Component Analysis (PCA)
Principal component analysis (PCA) in its typical form implicitly assumes that the observed data matrix follows a Gaussian distribution. However, PCA can be generalized to allow for other distributions – here, we take a look at its generalization for exponential families introduced by Collins et al. in 2001.
## The exponential family
The exponential family of distributions plays a major role in statistical theory and practical modeling. A large reason for this is that the family allows for many nice closed-form results and asymptotic guarantees for performing estimation and inference. Additionally, the family is fairly diverse and can model lots of different types of data.
Recall the basic form of an exponential family density (here, using Wikipedia’s notation):
$f(x | \theta) = h(x) \exp\left\{ \eta(\theta) T(x) - A(\theta) \right\}.$
Here, $T(x)$ is a sufficient statistic, $\eta(\theta)$ is the “natural parameter”, $A(\theta)$ is a normalizing factor that makes the distribution sum to $1$, and $h(x)$ is the base measure. The form of $A(\theta)$ is determined automatically once the other functions have been determined. Its form can perhaps more easily seen by writing
$f(x | \theta) = \frac{h(x) \exp\left\{ \eta(\theta) T(x) \right\}}{\exp\{A(\theta)\}}.$
Thus, enforcing $f$ sum to $1$, we must have that
$\sum\limits_{x \in \mathcal{X}} \frac{h(x) \exp\left\{ \eta(\theta) T(x) \right\}}{\exp\{A(\theta)\}} = \frac{1}{\exp\{A(\theta)\}}\sum\limits_{x \in \mathcal{X}} h(x) \exp\left\{ \eta(\theta) T(x) \right\} = 1.$
Rearranging, we have
$\exp\{A(\theta)\} = \sum\limits_{x \in \mathcal{X}} h(x) \exp\left\{ \eta(\theta) T(x) \right\}$
or, equivalently,
$A(\theta) = \log \left[ \sum\limits_{x \in \mathcal{X}} h(x) \exp\left\{ \eta(\theta) T(x) \right\} \right].$
In canonical form, we have $\eta(\theta) = \theta$, and it is also often the case that $T(x) = x$. In this case, the form simplifies to
$A(\theta) = \log \sum\limits_{x \in \mathcal{X}} h(x) \exp\left\{ \theta x \right\}.$
Thus, by expanding $A$, we can see that $f$ can also be written as
$f(x | \theta) = \frac{h(x) \exp\left\{ \theta x \right\}}{ \sum\limits_{x \in \mathcal{X}} h(x) \exp\left\{ \theta x \right\}}.$
An important property of $A(\theta)$ is that its first derivative with respect to $\theta$ is equal to the expectation of $f$:
\begin{align} A’(\theta) &= \sum\limits_{x \in \mathcal{X}} x \frac{h(x) \exp\left\{ \theta x \right\}} {\sum\limits_{x \in \mathcal{X}} h(x) \exp\left\{ \theta x \right\}} \\ &= \sum\limits_{x \in \mathcal{X}} x f(x | \theta) \\ &= \mathbb{E}_{f(x)}[x | \theta] \end{align}
## Generalized linear models
In the setting of (generalized) linear models, we have a design matrix $\mathbf{X} \in \mathbb{R}^{n \times p}$ and a vector of response variables $\mathbf{Y} \in \mathbb{R}^n$, and we’re interested in in finding a linear relationship between them. Often we assume that the conditional density of $\mathbf{Y} | \mathbf{X}$ is in the exponential family.
Modeling a linear relationship directly as $\mathbb{E}[\mathbf{Y} | \mathbf{X}] = \mathbf{X} \boldsymbol{\beta}$ for a parameter vector $\boldsymbol{\beta} \in \mathbb{R}^p$ implicitly assumes Gaussian errors when a mean-squared error loss is used. However, to accommodate non-Gaussian distributions, we can transform the expected value with a “link function” $g$. Denote $\mu(x) = \mathbb{E}[\mathbf{Y} | \mathbf{X}]$. Then we say
$g(\mu(x)) = \mathbf{X} \boldsymbol{\beta} \iff \mu(x) = g^{-1}(\mathbf{X} \boldsymbol{\beta}).$
Recall that $A’(\theta) = \mu(x)$, so we have
\begin{align} &\mu(x) = g^{-1}(\mathbf{X} \boldsymbol{\beta}) = A’(\theta) \\ \implies &\theta = (A’ \circ g)^{-1}(\mathbf{X} \boldsymbol{\beta}) \end{align}
where $\circ$ denotes a composition of the two functions.
A “canonical” link function is defined as $g = (A’)^{-1}$, and in this case we have $\theta = (A’ \circ (A’)^{-1})^{-1}(\mathbf{X} \boldsymbol{\beta}) = \mathbf{X} \boldsymbol{\beta}$.
For simplicity (and practical relevance), the rest of this post assumes the use of a canonical link function. To fit a GLM, we can write down the likelihood of the parameters $\boldsymbol{\beta}$ given the data $\mathbf{X}, \mathbf{Y}$, and maximize that likelihood using standard optimization methods (gradient descent, Newton’s method, Fisher scoring, etc.). The likelihood for a sample $Y_1, \dots, Y_n$ and $X_1, \dots, X_n$ looks like:
\begin{align} L &= \prod\limits_{i=1}^n h(Y_i) \exp\left\{ \theta Y_i - A(\theta) \right\} \\ &= \prod\limits_{i=1}^n h(Y_i) \exp\left\{ X_i \beta Y_i - A(X_i \beta) \right\} \\ \end{align}
The log-likelihood is
$\log L = \sum\limits_{i=1}^n \left[ \log h(Y_i) + X_i \beta Y_i - A(X_i \beta)\right].$
We can then maximize this with respect to $\beta$, which effectively amounts to maximizing $\sum\limits_{i=1}^n \left[ X_i \beta Y_i - A(X_i \beta) \right]$ because the term $\log h(Y_i)$ is constant with respect to $\beta$.
## From GLMs to PCA
Now, suppose we have a data matrix $\mathbf{X} \in \mathbb{R}^{n \times p}$, and instead of finding a relationship with some response vector, we’d like to understand the patterns of variation within $\mathbf{X}$ alone. We can use a similar modeling approach as that of GLMs (where we assume a linear relationships), but now we model $\mathbb{E}[\mathbf{X}] = \mathbf{A}\mathbf{V}$, where $\mathbf{A}$ and $\mathbf{V}$ are two lower-rank matrices ($\mathbf{A} \in \mathbb{R}^{n \times k}$, and $\mathbf{V} \in \mathbb{R}^{k \times p}$, where $k < p$), neither of which is observed. Notice that now both our “design matrix” and our parameter vector are unknown, unlike in the case of GLMs where we observed a design matrix.
This is precisely the approach taken in a paper by Collins et al. called “A Generalization of Principal Component Analysis to the Exponential Family”. The rest of this post explores the details of this paper.
To be more precise, let $\Theta \in \mathbb{R}^{n \times p}$ be a matrix of canonical parameters. We are now trying to find $\mathbf{A}$ and $\mathbf{V}$ to make the approximation
$\Theta \approx \mathbf{A} \mathbf{V}$
where $\mathbf{A} \in \mathbb{R}^{n \times k}$ and $\mathbf{V} \in \mathbb{R}^{k \times p}$. Assume $k=1$ for now, again for simplicity. In this case, $\mathbf{A}$ and $\mathbf{V}$ are vectors, so let’s denote them as $\mathbf{a}$ and $\mathbf{v}$. Using the exponential family form above, the likelihood is then
\begin{align} \log L(\mathbf{a}, \mathbf{v}) &= \sum\limits_{i = 1}^n \sum\limits_{j = 1}^p \left(\theta_{ij} x_{ij} - A(\theta_{ij}) \right) \\ &= \sum\limits_{i = 1}^n \sum\limits_{j = 1}^p \left( a_i v_j x_{ij} - A(a_i v_j) \right) \\ \end{align}
where $a_i$ is the $i$th element of $\mathbf{a}$, and $v_j$ is the $j$th element of $\mathbf{v}$.
Now we can maximize this likelihood with respect to $\mathbf{A}$ and $\mathbf{V}$. A common way to do this is via alternating minimization of the negative log-likelihood (which is equivalent to maximizing the log-likelihood), which proceeds as:
1. Fix $\mathbf{v}$, and minimize the negative log-likelihood w.r.t. $\mathbf{a}$.
2. Fix $\mathbf{a}$, and minimize the negative log-likelihood w.r.t. $\mathbf{v}$.
3. Repeat Steps 1-2 until convergence.
It turns out that these minimization problems are convex in $\mathbf{a}$ and $\mathbf{v}$ individually (while the other one is held fixed), but not convex in both $\mathbf{a}$ and $\mathbf{v}$ simultaneously.
One way to view this problem is as $n + p$ GLM regression problems, where each regression problem has one parameter. For example, when fitting the $i$th element of $\mathbf{a}$, $a_i$ while $\mathbf{v}$ is fixed, we have the following approximation problem:
$\Theta_i \approx a_i \mathbf{v}$
where $\Theta_i$ is the $i$th row of $\Theta$.
This leads to the following log-likelihood:
$\log L(\mathbf{a}_i) = \sum\limits_{j=1}^p \left[a_i v_j \mathbf{X}_{ij} - A(a_i v_j)\right]$
where the subscript $j$ indicates the $j$th element of a vector.
We can see a correspondence to the typical univariate GLM setting in which we have observed data:
\begin{align} a_i &\iff \boldsymbol{\beta}\; \text{(parameter)} \\ \mathbf{v} &\iff \mathbf{X} \; \text{(Design matrix)} \\ \mathbf{X}_i &\iff \mathbf{Y} \; \text{(Response vector)} \end{align}
where $\mathbf{X}_i \in \mathbb{R}^p$ is the $i$th row of $\mathbf{X}$. A similar correspondence exists when we fit $\mathbf{v}_j$ while holding $\mathbf{A}$ fixed.
## Examples
### Gaussian (PCA)
If we assume Gaussian observations with fixed unit variance, then the only parameter is the mean $\mu$. In this case have $A(\theta) = \frac{1}{2} \theta^2 = \frac12 \mu^2$. We also have $A’(\theta) = \mu$, which is indeed the expected value. The log-likelihood of the PCA model is
$\log L(\mathbf{a}, \mathbf{v}) = \sum\limits_{i = 1}^n \sum\limits_{j = 1}^p \left[ (a_i v_j x_{ij}) - \frac12 (a_i v_j)^2 \right]$
Minimizing the negative log-likelihood is also equivalent to minimizing the mean-squared error:
$\log L(\mathbf{a}, \mathbf{v}) = \frac12 ||\mathbf{X} - \mathbf{a}^\top \mathbf{v}||_2^2$
Again, realizing that that this problem decomposes into $n + p$ regression problems, we can solve for the updates for $\mathbf{a}$ and $\mathbf{v}$. The minimization problem for $a_i$ is
\begin{align} \min_{a_i} \sum\limits_{j=1}^p \left[ -a_i v_j x_{ij} + \frac12 (a_i v_j)^2 \right] \end{align}
In vector form for $\mathbf{a}$, we have
\begin{align} \min_{\mathbf{a}} \frac12 ||\mathbf{X} - \mathbf{V} \mathbf{a}^\top||_2^2 \end{align}
Of course, this has the typical least squares solution. Here’s a quick derivation for completeness:
$\nabla_{\mathbf{a}} \log L = \mathbf{v}^\top \mathbf{X} - \mathbf{v}^\top \mathbf{v} \mathbf{a}$
Equating this gradient to 0, we have
\begin{align} &\mathbf{v}^\top \mathbf{X} - \mathbf{v}^\top \mathbf{v} \mathbf{a} = 0 \\ \implies& \mathbf{a} = (\mathbf{v}^\top \mathbf{v})^{-1} \mathbf{v}^\top \mathbf{X} \end{align}
Since $\mathbf{v}$ is a vector, this simplifies to
$\mathbf{a} = \frac{\mathbf{v}^\top \mathbf{X}}{||\mathbf{v}||_2^2}$
Similarly, the update for $\mathbf{v}$ is
$\mathbf{v} = (\mathbf{a}^\top \mathbf{a})^{-1} \mathbf{a}^\top \mathbf{X} = \frac{\mathbf{a}^\top \mathbf{X}}{||\mathbf{a}||_2^2}.$
So the alternating least squares algorithm for Gaussian PCA is
1. Update $\mathbf{a}$ as $\mathbf{a} = \frac{\mathbf{v}^\top \mathbf{X}_i}{||\mathbf{v}||_2^2}$.
2. Update $\mathbf{v}$ as $\frac{\mathbf{a}^\top \mathbf{X}_i}{||\mathbf{a}||_2^2}$.
3. Repeat Steps 1-2 until convergence.
Of course, the Guassian case is particularly nice in that it has an analytical solution for the minimum each time. This isn’t the case in general – let’s see a (slightly) more complex example below.
### Bernoulli
With a Bernoulli likelihood, each individual regression problem now boils down to logistic regression. The normalizing function in this case is $A(\theta) = \log\left\{ 1 + \exp(\theta)\right\}$. Thus, the optimization problem for $\mathbf{a}_i$ is
$\min_{a_i} \sum\limits_{j=1}^p \left[ -(a_i v_j) \mathbf{X}_{ij} + \log(1 + \exp(a_i v_j)) \right].$
Of course, there’s no analytical solution in this case, so we can resort to iterative optimization methods. For example, to perform gradient descent, the gradient is
$\nabla_{a_i} \left[- \log L\right] = \sum\limits_{j = 1}^p \left(- \mathbf{X}_{ij} + \frac{\exp(a_i v_j)}{1 + \exp(a_i v_j)}\right) v_j.$
Similarly the gradient for $v_j$ is
$\nabla_{v_j} \left[- \log L\right] = \nabla_{\mathbf{v}_i} \left[- \log L\right] = \sum\limits_{i = 1}^n \left(- \mathbf{X}_{ij} + \frac{\exp(a_i v_j)}{1 + \exp(a_i v_j)}\right) v_j.$
Then, for some learning rate $\alpha$, we could run
1. For $i \in [n]$, update $a_i$ as $a_i = a_i - \alpha \nabla_{a_i} \left[- \log L\right]$.
2. For $j \in [p]$, update $v_j$ as $v_j = v_j - \alpha \nabla_{v_j} \left[- \log L\right]$.
3. Repeat Steps 1-2 until convergence.
## Simulations
Here’s some simple code for performing alternating least squares with Guassian data as desribed above. Recall that we’re minimizing the negative log-likelihood here, which is equivalent to maximizing the log-likelihood.
import numpy as np
import matplotlib.pyplot as plt
n = 40
p = 2
k = 1
A_true = np.random.normal(size=(n, k))
V_true = np.random.normal(size=(k, p))
X = np.matmul(A_true, V_true) + np.random.normal(size=(n, p))
X -= np.mean(X, axis=0)
num_iters = 100
A = np.random.normal(loc=0, scale=1, size=(n, k))
V = np.random.normal(loc=0, scale=1, size=(k, p))
for iter_num in range(num_iters):
A = np.matmul(X, V.T) / np.linalg.norm(V, ord=2)**2
V = np.matmul(A.T, X) / np.linalg.norm(A, ord=2)**2
And here’s a plot of the projected data onto the inferred line (here, plotting the “reconstruction” of $\mathbf{X}$ from its inferred components).
Here’s code for doing “logistic PCA”, or PCA with bernoulli data. I used Autograd to easily compute the gradients of the likelihood here.
import matplotlib.pyplot as plt
n = 100
p = 2
k = 1
A_true = np.random.normal(size=(n, k))
V_true = np.random.normal(size=(k, p))
X_probs = 1 / (1 + np.exp(-np.matmul(A_true, V_true)))
X = np.random.binomial(n=1, p=X_probs)
return 1 / (1 + np.exp(-x))
def bernoulli_likelihood_A(curr_A):
return -np.sum(np.log(np.matmul(X.T, X))) - np.sum(np.multiply(np.matmul(curr_A, V), X)) + np.sum(np.log(1 + np.exp(np.matmul(curr_A, V))))
def bernoulli_likelihood_V(curr_V):
return -np.sum(np.log(np.matmul(X.T, X))) - np.sum(np.multiply(np.matmul(A, curr_V), X)) + np.sum(np.log(1 + np.exp(np.matmul(A, curr_V))))
num_iters = 500
A = np.random.normal(loc=0, scale=1, size=(n, k))
V = np.random.normal(loc=0, scale=1, size=(k, p))
learning_rate = 1e-2
for iter_num in range(num_iters):
|
2023-04-02 03:29:03
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 8, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.998810887336731, "perplexity": 755.110685937144}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950373.88/warc/CC-MAIN-20230402012805-20230402042805-00186.warc.gz"}
|
https://whystartat.xyz/index.php?title=Talk:Modular_equivalence&oldid=137
|
# Talk:Modular equivalence
Mod notation can be bulky and counter-intuitive. Here is an alternate mod notation that simply removes the middle line from the equivalence sign, places the modulus inside and drops the now moot 'mod'.
The following macros are defined for the Latex that follows and the second graphic shows the result:
Examples: In lieu of $x\equiv y \mod{z}$, we have $x\bm{z}y$ Rather than take both sides $\mod{m}$, we would take both sides $\bm{m}$ Taking all $m\bm{7}1\in \mathbb{Z}_{+}$ gives $\{1,8,15,\dots\}$ A non-equivalence can be written $9\bmn{4}3$ It can be chained ala $27\bm{11}5\bm{3}2$ and even be adapted to carry quotient information as in $77\bmq{13}{5}12$.
Here's how that looks when rendered:
|
2022-12-03 21:38:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9798195362091064, "perplexity": 1130.3822331232102}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710941.43/warc/CC-MAIN-20221203212026-20221204002026-00364.warc.gz"}
|
https://tlamadon.github.io/rblm/reference/grouping.classify.once.html
|
clusters firms based on their cross-sectional wage distributions
grouping.classify.once(measures, k = 10, nstart = 1000, iter.max = 200,
step = 20)
## Arguments
measures object created using grouping.getMeasures number of groups (default:1000) total number of starting values (default:100) max number of step for each repetition step size in the repeating cross sectional data, needs a column j (firm id) and w (log wage) number of points to use for wage distribution
|
2022-12-03 04:42:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3612304925918579, "perplexity": 7044.450126341604}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710924.83/warc/CC-MAIN-20221203043643-20221203073643-00061.warc.gz"}
|
https://rupress.org/jgp/article/120/1/1/54713/Current-dependent-Block-of-Rabbit-Sino-Atrial-Node
|
“Funny” (f-) channels have a key role in generation of spontaneous activity of pacemaker cells and mediate autonomic control of cardiac rate; f-channels and the related neuronal h-channels are composed of hyperpolarization-activated, cyclic nucleotide–gated (HCN) channel subunits. We have investigated the block of f-channels of rabbit cardiac sino-atrial node cells by ivabradine, a novel heart rate-reducing agent. Ivabradine is an open-channel blocker; however, block is exerted preferentially when channels deactivate on depolarization, and is relieved by long hyperpolarizing steps. These features give rise to use-dependent behavior. In this, the action of ivabradine on f-channels is similar to that reported of other rate-reducing agents such as UL-FS49 and ZD7288. However, other features of ivabradine-induced block are peculiar and do not comply with the hypothesis that the voltage-dependence of block is entirely attributable to either the sensitivity of ivabradine-charged molecules to the electrical field in the channel pore, or to differential affinity to different channel states, as has been proposed for UL-FS49 (DiFrancesco, D. 1994. Pflugers Arch. 427:64–70) and ZD7288 (Shin, S.K., B.S. Rotheberg, and G. Yellen. 2001. J. Gen. Physiol. 117:91–101), respectively. Experiments where current flows through channels is modified without changing membrane voltage reveal that the ivabradine block depends on the current driving force, rather than voltage alone, a feature typical of block induced in inwardly rectifying K+ channels by intracellular cations. Bound drug molecules do not detach from the binding site in the absence of inward current through channels, even if channels are open and the drug is therefore not “trapped” by closed gates. Our data suggest that permeation through f-channel pores occurs according to a multiion, single-file mechanism, and that block/unblock by ivabradine is coupled to ionic flow. The use-dependence resulting from specific features of If block by ivabradine amplifies its rate-reducing ability at high spontaneous rates and may be useful to clinical applications.
## Introduction
The “funny” current, or “pacemaker” (If) current, was first described in cardiac pacemaker cells of the mammalian sino-atrial node (SAN)* as a current slowly activating on hyperpolarization at voltages in the diastolic depolarization range, and contributing essentially to generation of cardiac rhythmic activity and its control by sympathetic innervation (Brown et al., 1979; Brown and DiFrancesco, 1980; Yanagihara and Irisawa, 1980). A pacemaker current had been described previously in another cardiac preparation able to pace spontaneously, the Purkinje fiber, where it had been erroneously interpreted as a pure K+ current (current IK2: Noble and Tsien, 1968). Soon after the discovery of If, a reinterpretation of the nature of IK2 led to the demonstration that in this preparation, too, the pacemaker current was the same as in the SAN (DiFrancesco, 1981a,b). If-type hyperpolarization-activated currents were subsequently described in other cardiac regions and in a variety of neuronal cells (for reviews see DiFrancesco, 1993; Pape, 1996).
Further investigation in SAN and other types of cells revealed the specific properties of If. If is a current carried by both Na+ and K+, inward in the pacemaker range of voltages, activated on hyperpolarization from a threshold of about −40/−50 mV and fully activated at about −100/−110 mV (DiFrancesco, 1981b; DiFrancesco et al., 1986). f-channels are modulated by intracellular cAMP by an action involving direct cAMP binding to channel proteins and not mediated by a phosphorylation mechanism (DiFrancesco and Tortora, 1991), although phosphorylation-dependent processes may also be operating to modulate If at a different site (Chang et al., 1991; Accili et al., 1997).
In heart, neurotransmitter-induced control of cardiac rhythm is mediated by If through its second-messenger cAMP, whose synthesis is stimulated and inhibited by β-agonists and muscarinic agonists, respectively (DiFrancesco et al., 1986; DiFrancesco and Tromba, 1988a,b). Moderate vagal activity, such as the one present during basal vagal tone, controls cardiac rate by modulation of If, rather than by the opening of ACh-activated K+ channels (DiFrancesco et al., 1989).
Ionic, kinetic, and modulatory properties of the Ih current (the neuronal equivalent of the cardiac If) are similar to those of If (Pape, 1996). The physiological function of Ih depends on the cell type where the current is expressed and is based on its ability to generate a depolarization following an appropriate stimulus: a neurotransmitter-mediated input able to modify the intracellular cAMP content, or membrane hyperpolarization.
The neuronal Ih current contributes to controlling the resting potential, and hence modulating excitability, of several types of neurons, such as CA1 hippocampal neurons, DRG neurons, and cerebellar neurons (Maccaferri et al., 1993; Magee, 1998; Cardenas et al., 1999; Southan et al., 2000). In neurons where repetitive activity is present, Ih can contribute, like If does in cardiac pacemaker, to the firing discharge (McCormick and Pape, 1990; Kamondi and Reiner, 1991; Golowasch and Marder, 1992; Maccaferri and McBain, 1996; Thoby-Brisson et al., 2000). Several types of sensory neurons express h-channels, which may be involved either in the direct perception of external stimuli, or in modulating the transduction of sensory stimuli into electrical signals (Demontis et al., 1999; Vargas and Lucero, 1999; Stevens et al., 2001). Recently, h-channels have been involved in neuronal plasticity by their action at presynaptic membranes (Beaumont and Zucker, 2000; Mellor et al., 2002).
A significant advancement in the study of the molecular properties of pacemaker channels has been obtained with the cloning of a new family of channels (hyperpolarization-activated, cyclic nucleotide gated [HCN] family: Gauss et al., 1998; Ludwig et al., 1998; Santoro et al., 1998; Vaccari et al., 1999). HCN channels have a structure similar to that of K+ voltage–dependent (Kv) and cyclic nucleotide–gated (CNG) channels: six transmembrane domains with an S4 segment rich in positively charged residues, which in analogy to Kv channels may act as a voltage-sensitive element, a “pore” region between S5 and S6, and a consensus sequence for binding of cyclic nucleotides at the COOH terminus. The electrophysiological properties of HCN channels expressed heterologously clearly indicate that they are the molecular determinants of native pacemaker channels in the heart and nervous system (Gauss et al., 1998; Ludwig et al., 1998, 1999; Santoro et al., 1998; Ishii et al., 1999; Vaccari et al., 1999; Moroni et al., 2000).
The relevance of If in the control of heart rate makes it an important pharmacological target. Several f-channel blocker molecules have been developed, such as UL-FS49 (Goethals et al., 1993; DiFrancesco, 1994), ZD7288 (BoSmith et al., 1993; Gasparini and DiFrancesco, 1997; Shin et al., 2001), and ivabradine (Thollon et al., 1994; Bois et al., 1996). These molecules have been shown to induce heart rate slowing with limited inotropic side effects, and have therefore a potential for therapeutic application in those cases where it is useful to slow heart rate without altering other cardiovascular functions. A typical example is the treatment of angina pectoris or other clinical forms of myocardial ischemia, where cardiac rate slowing is beneficial since it decreases the oxygen requirement by cardiac muscle and thus reduces the risk of ischemic insult, but other modes of employment are possible. This underlines the significance of developing new drugs specifically interacting with pacemaker channels.
In this paper we have investigated the action of ivabradine on native f-channels in rabbit SAN cells. A previous study has shown that ivabradine acts with a high degree of specificity on f-channels from the intracellular side of the membrane (Bois et al., 1996). We find that ivabradine blocks f-channels when they are open, and its block is strongly use-dependent. The block is exerted preferentially when the current deactivates on depolarization, and is relieved when it activates on hyperpolarization.
Experiments where the current flow across f-channels is varied at a constant voltage (by use of Cs+, an extracellular If blocker, or of a low-Na+ extracellular solution) reveal that block depends on the current driving force, rather than simply on voltage. The block by ivabradine is similar to that exerted by intracellular cations on inwardly-rectifying K+ channels, and reveals that permeation across f-channels occurs as in multiion, single-file pores.
## Materials And Methods
The experiments performed in this work conformed with the guidelines of the care and use of laboratory animals established by Italian State (D.L. 116/1992) and European directives (86/609/CEE).
A description of the methods used to obtain and voltage-clamp isolated SAN cells has been published elsewhere (DiFrancesco et al., 1986; Accili et al., 1997). Briefly, New Zealand female rabbits (0.8–1.2 Kg) were anesthetized by intramuscular injection of 4.6 mg Kg−1 xylazine and 60 mg Kg−1 ketamine and killed by cervical dislocation. After full exanguination the heart was quickly removed and placed in prewarmed (37°C) Tyrode solution (in mM: NaCl, 140; KCl, 5.4; CaCl2, 1.8; MgCl2, 1; D-glucose, 5.5; HEPES-NaOH, 5; pH 7.4) containing 1,000 U heparin. SAN tissue was then surgically isolated and cut into 5–6 stripes, and the enzymatic cell dissociation procedure was performed. Collagenase (224 U ml−1, Worthington), Elastase (1.9 U ml−1, Sigma-Aldrich), and Protease (0.6 U ml−1, Sigma-Aldrich) were used to degrade intercellular matrix and loosen cell-to-cell adhesion in order to ease the mechanical cell dispersion procedure. Isolated single cells were maintained at 4°C in Tyrode solution for the day of the experiment. Cells under study were allowed to settle into plastic Petri dishes on the stage of an inverted microscope, and were perfused with Tyrode solution to which BaCl2 (1 mM), MnCl2 (2 mM), and, when required, 4-aminopiridine (5 mM) were added to ameliorate If dissection over other ionic components. Control and test solutions were delivered by a fast-perfusion temperature-controlled device allowing rapid solution changes.
Whole-cell pipettes were filled with an intracellular-like solution containing (in mM): K-Aspartate, 130; NaCl, 10; EGTA-KOH, 5; CaCl2, 2; MgCl2, 2; ATP (Na-salt), 2; creatine phosphate, 5; GTP (Na-salt), 0.1; pH 7.2. In low (35 mM) Na+ solution, Na+ was replaced by an equimolar amount of choline chloride. Ivabradine (3-(3-{[((7S)-3,4-dimethoxybicyclo [4,2,0] octa-1,3,5-trien7-yl) methyl] methylamino} propyl)-1,3,4,5-tetrahydro-7,8-dimethoxy-2H-3-benzazepin-2-one hydrochloride; Fig. 1 A) was added to the extracellular solution by dissolving a stock solution (0.1–10 mM) to the final concentration desired. The drug was provided by the Institut de Recherches Internationales Servier, France. All experiments were performed at the controlled temperature of 32 ± 0.5°C. Currents were recorded and on-line filtered at a corner frequency of 1 KHz with an Axopatch 200B amplifier, and acquired using the pClamp 7.0 software (Axon Instruments, Inc.).
## Results
Superfusion of SAN cells with ivabradine caused a reduction of whole-cell If current which accumulated in a concentration-dependent way during repetitive trains of activating/deactivating voltage steps (−100/+5 mV), until steady-state block was reached. Typical time courses of block onset and recovery and sample traces are shown for three different concentrations (0.3, 3, and 30 μM) in Fig. 1 B.
We obtained a mean steady-state current block of 19.5 ± 2.2% (n = 3), 65.9 ± 2.4% (n = 19), 87.8 ± 4.0% (n = 4); a time constant of block development (τon) of 90.2 ± 2.2 s (n = 3), 61.5 ± 2.7 s (n = 21), and 13.2 ± 0.7 s (n = 4); and a time constant of block recovery (τoff) of 159.4 ± 17.9 s (n = 3), 136.2 ± 16.2 s (n = 6), and 178.5 ± 39.8 s (n = 3) for 0.3, 3, and 30 μM ivabradine, respectively. Full current recovery following wash-off of the drug was apparently not achieved in the experiments shown in Fig. 1. However, due to its slow time course, recovery could be underestimated by the possible presence of current run-down, which occurs on a similar time scale (DiFrancesco et al., 1986).
A dose–response relation of the current block was obtained using the same voltage protocol for a wider range of drug concentrations (Fig. 1 C). Fitting the dose–response curve with the Hill equation yielded a half-block concentration of 1.5 μM and a slope coefficient of 0.8. The concentration dependence of If block was similar to that reported previously by Bois et al. (1996).
Experiments such as those in Fig. 1 indicate the presence of accumulation of inhibition during repetitive steps, a property that could reflect either relatively slow kinetics of drug-channel interaction, or that the f-channel block by ivabradine is use-dependent. To further investigate this aspect, in Fig. 2 we tried to establish whether the drug binds to f-channel closed and open configurations with different affinities. An If activation/deactivation protocol (same as in Fig. 1) was first applied; when perfusion with ivabradine started, the repetitive protocol was interrupted and the membrane voltage held for 100 s at −35 mV, where f-channels are closed.
Fig. 2 A, top, shows the time course of the If amplitude at −100 mV; sample current traces recorded at various times (a–d, as indicated) are plotted in the lower panel. Ivabradine (3 μM) did not show appreciable affinity for the closed conformation of f-channels since the current amplitude at −100 mV just after resuming the pulsing protocol was not decreased by the long exposure to the drug at −35 mV. As expected, reapplication of activating/deactivating steps in the presence of the drug caused the current to decrease with a time course similar to that in Fig. 1on = 51.9 s; fractional steady-state block = 0.513). Similar results were obtained in n = 4 cells, where the current size during the first step to −100 mV after the rest period at –35 mV (for 66–120 s) was 99.2 ± 1.3% of the control size. These data indicate that channel opening is a necessary requirement for ivabradine-induced block to occur.
Further, we checked if channel unblocking takes place when f-channels are in the closed configuration (Fig. 2 B). Following full block elicited by a standard activation/deactivation protocol, the membrane was held at −35 mV while the drug was washed off. After 90 s at −35 mV, the repetitive protocol was resumed and, as apparent from Fig. 2 B, the current amplitude was the same as at the end of the blocking protocol, indicating that no block reduction had occurred while clamping at −35 mV. Removal of block then proceeded normally (τoff = 94.8 s). Similar data were obtained in n = 3 cells, where the current size during the first step following the rest period (84 to 144 s) was 101.2 ± 3.0% of the control size.
The results of Fig. 2 argue in favor of the hypothesis that binding/unbinding reactions are limited to the open state. This can be interpreted by assuming that the drug can only access the blocking site while the channel is open. It is noteworthy that a “trapping” mechanism has been proposed for HCN channel block by another molecule (ZD7288), on the basis of experiments suggesting the existence of a wide intracellular channel vestibule which is normally inaccessible when the channel is closed and able to confine the drug molecule upon closing (Shin et al., 2001).
As well as a dependence on the open/closed state of f-channels, block due to molecules such as UL-FS49 and ZD7288 also displays a voltage dependence, with hyperpolarization favoring the unblocking process (DiFrancesco, 1994; Shin et al., 2001).
To study the voltage dependence of the action of ivabradine on If, in view of the requirement of open channels for block to occur, we first investigated steady-state block at voltages negative to the activation threshold, where f-channels are open at least part of the time, by applying long activating steps. We found that the block exerted by the drug using this protocol was much smaller than that observed with activating/deactivating protocols (as in Figs. 1 and 2) with the same hyperpolarizing step. A comparison between the two different protocols is shown in Fig. 3. On the left (Fig. 3 A), 3 μM ivabradine applied during activation/deactivation protocols (−70/+5 mV, upper; −100/+5 mV, lower) caused a current reduction of 78.3% at −70 mV and 63.9% at −100 mV (compare control records with full-block records labeled by asterisks). These data are in accordance with the results in Fig. 1. Panels on the right (Fig. 3 B), on the other hand, refer to experiments where steps of several tens of seconds were applied to −70 (upper) and −100 mV (lower), and the drug applied after the current had reached steady-state activation. The current decrease with the latter protocol was much smaller (14.0% at −70 mV and 6.0% at −100 mV). The mean If block was 12.4 ± 1.9% at −70 mV (n = 4) and 6.4 ± 1.6% at −100 mV (n = 5) with the long-step protocols, as compared with the values of 77.4 ± 5.7% at −70 mV (n = 4) and 65.9 ± 2.4% at −100 mV (n = 19, as reported above) resulting from activating/deactivating protocols.
These large differences indicate that the binding reaction is strongly affected by voltage, and by the actual voltage protocol used. The use of long-step protocols shows that the steady-state blocking action of the drug at hyperpolarized voltages is modest, and suggests that most of the drug action observed with activating/deactivating protocols as in Figs. 1, 2, and 3 A is not exerted during the hyperpolarizing step, but during the decay tail at 5 mV, when the channels are open and the membrane depolarized. According to this hypothesis, while on the one hand the channels need to be open for block to occur, on the other the block occurs mostly during current deactivation and is relieved during current activation.
To verify the presence of hyperpolarization-induced block relief, we used the analysis shown in Fig. 4. A standard activation/deactivation protocol was first applied and the drug (3 μM) was perfused until steady-state block developed. On the left side of Fig. 4 A the control trace and traces recorded after 30, 60, and 174 s (corresponding to steady-state block) of drug perfusion are plotted in sequence. Once steady-state block was achieved, while still in the presence of the drug, a long (40 s) step to –100 mV was applied, during which the current underwent a gradual, slow increase with time toward levels approaching that in control conditions. A double exponential fit of the current during the 40-s long step (Fig. 4 B) clearly revealed the presence of two kinetically distinct processes. Although the early part of current activation developed with a time constant of 235 ms, a value similar to that in control conditions (259 ms), the late one increased with a much slower time constant of 7.5 s; this second process did not therefore reflect normal activation kinetics, but rather removal of block possibly associated with unbinding of the drug. Resuming the activation/deactivation protocol reestablished the previous current block, as apparent from the set of current traces recorded every 30 s after the long step to −100 mV (right side of Fig. 4 A).
In n = 6 cells, the mean time constants of current activation at −100 mV in control, in the early fraction of >25-s long steps to –100 mV during drug perfusion (3 μM), and on return from the long-step protocol were: 215 ± 15, 237 ± 28, and 213 ± 14 ms, respectively, whereas the slow increase at −100 mV had a time constant of 7.0 ± 0.5 s. These data indicate that ivabradine, as has been proposed for UL-FS49 and ZD7288 (Goethals et al., 1993; DiFrancesco, 1994; Shin et al., 2001), binds to the channel blocking site less favorably at hyperpolarized than at depolarized voltages.
We next proceeded to quantify the degree of block in a fuller range of activation/deactivation voltages by measuring the current decrease caused by 3 μM ivabradine at steady-state in the range −110 to 20 mV. At voltages below threshold (more negative than −40 mV), we applied long steps to the test voltage to activate the current at steady-state and then applied the drug until block developed fully, as in Fig. 3 B; in Fig. 5, mean fractional block values thus obtained are plotted as open circles.
In agreement with the observation in Fig. 4 that hyperpolarization removes block, the fraction of blocked current decreased at more negative voltages; at −100 mV the blocked fraction was small (6.4 ± 1.6%). We took advantage of these features to measure the fractional block at voltages above threshold. We held the membrane at a variable test potential (−40 to 20 mV) and applied repetitive (1/6 Hz) steps to −100 mV for 1.2 s; since at −100 mV little block occurs, and block removal requires relatively long times (mean time constant of 7.0 s, see above), we can assume that the block obtained with this protocol develops essentially entirely during deactivation at the test voltage. The mean fractional current block values measured by these protocols are also plotted in Fig. 5 as filled circles.
Like ivabradine, other If-blocking molecules such as UL-FS49 and ZD7288 have been reported to cause a voltage-dependent block whose efficiency increases at more positive potentials (DiFrancesco, 1994; Shin et al., 2001). However, the action of ivabradine is peculiar in that a relatively sharp change of fractional block appears when going from voltages where the current is just inward (i.e., −20 mV) to voltages where the current is just outward (i.e., −10 mV); the block at deactivating voltages then seems to level up to a constant value of ∼0.56. This behavior differs from that of a purely voltage-dependent block mechanism, such as those attributable to charged molecules entering the pore for a fraction of the membrane voltage drop (Hille, 2000), according to which the current inhibition would be expected to increase smoothly at more depolarized potentials up to full block. The observation of a steep change in the degree of block across the expected current reversal potential led us to hypothesize that the direction of current flow could be a determinant of block.
To test this hypothesis, we searched for conditions able to discriminate between the efficiency of ivabradine blocking action in the presence and absence of current flow. One way to obtain this was to use Cs+, which is a known extracellular blocker of If. Cs+ ions block f-channels in a voltage-dependent way, by entering channels from outside for a fraction (∼71%) of the membrane electric field on hyperpolarization (DiFrancesco, 1982). Although Cs+ blocks the current flow, it does not affect channel opening/closing, as shown for example by envelope tests used to verify the time course of channel opening (by application of hyperpolarizing steps of variable duration and a fixed depolarizing step where tail currents are measured) in the presence of Cs+ (DiFrancesco, 1982). Thus, in the presence of Cs+, channels enter their open state normally on hyperpolarization, but no (or little) current is carried across the membrane.
In Fig. 6, we applied a protocol similar to that in Fig. 4, but added 5 mM Cs+ to the perfusate during the prolonged (45 s) hyperpolarization to −100 mV, following steady-state block by 3 μM ivabradine achieved by a standard activation/deactivation protocol (−100/+5 mV: sample traces representing control [cont], mid [a], and full block [b] are shown on the left side). As expected, during perfusion with Cs+ the current was strongly reduced. At the end of the long hyperpolarizing step, Cs+ was washed out and the repetitive –100/+5 mV protocol resumed in the continuous presence of the drug. As apparent from the traces on the right side of Fig. 6, the current elicited by the first step to −100 mV following Cs+ perfusion had not recovered toward the control amplitude, but had kept the amplitude after full ivabradine block (compare trace d with trace b in the inset).
The protocol was repeated in the same cell in the absence of Cs+, under which conditions block removal was normally observed during the prolonged step to −100 mV, as in Fig. 4 (unpublished data). In n = 5 cells where the protocol of Fig. 6 was applied, the size of the current recorded after steady-state block by ivabradine (3 μM) and after a long step in the presence of Cs+ (trace d) was essentially identical (101.4 ± 2.9%) to that preceding Cs+ perfusion (trace c). In n = 10 cells, the current increased up to 137.9 ± 3.7% of the control value with the same protocol in the absence of Cs+ (see Fig. 4). These data illustrate a remarkable difference between the actions of ivabradine and of another If blocker, UL-FS49 (DiFrancesco, 1994), and suggest that voltage hyperpolarization alone is not sufficient to remove block of open f-channels by ivabradine.
The most economical interpretation of the Cs+ result is that relief of ivabradine block involves a current-dependent “kick-off” mechanism, similar to that operating in channels with multiion, single-file pores when a blocking ion is present (Hille, 2000). However, since the possibility could not be excluded that Cs+-bound f-channels, although open, have a modified affinity for ivabradine, we investigated the action on ivabradine block of changes in the current flow caused by changes in the chemical gradient, rather than in membrane voltage.
In Fig. 7, the fractional block curve measured in the control Tyrode solution, as replotted from Fig. 5 (filled circles), is compared with that obtained with similar protocols in a low (35 mM) Na+ solution (open circles). The low Na+ curve still displays a region of steep slope and has an overall voltage-dependence similar to the curve in Tyrode, but is shifted to more negative voltages. The shift is such as to determine a large difference of blocking degrees between the two curves at intermediate voltages. For example, at −30 mV, the fractional block was 0.266 ± 0.014 in normal Na+, and 0.607 ± 0.016 in the low Na+ solution (see inset current traces).
An interesting feature in Fig. 7 is that in both curves, the region of steepest slope was located across the expected If reversal potential (Ef). In n = 7 cells, we measured fully activated I/V relations in normal and reduced external Na+ according to standard protocols (DiFrancesco et al., 1986); Ef obtained from linear fitting of mean I/V curves (see Fig. 8) was −16.0 mV in normal Tyrode solution and −34.4 mV in low Na+ solution, with a shift of 18.4 mV. These values are indicated by dotted lines in Fig. 7 B, and clearly intercept the I/V curves in their regions of steepest slope.
These data agree with the results of Fig. 6 to indicate that the direction of ionic flow is a main determinant of the degree of block by ivabradine, i.e., block appears to be current-dependent rather than voltage-dependent, a feature typical of intracellular cation block of inward rectifiers.
To illustrate the inwardly rectifying action of ivabradine, in Fig. 8 mean fully activated I/V relations from n = 7 cells exposed to normal and reduced external Na+ concentration are shown in control conditions (top) and under conditions of steady-state block by 3 μM ivabradine (bottom). The latter curves were simply obtained by multiplying control curves of the top panel of Fig. 8 by the corresponding fractional block from Fig. 7. It is apparent that the drug induces inwardly rectifying behavior in both I/V relations, with the reduction of current becoming substantial at voltages E>Ef, independently of the Ef value.
## Discussion
Previous work (Bois et al., 1996) has indicated that ivabradine, a recently developed drug able to specifically slow cardiac rate in the absence of significant side effects on cardiac inotropism (Thollon et al., 1994; Peréz et al., 1995) acts in SAN cells by selectively blocking If channels from the intracellular side.
Other drugs have been synthesized on the basis of their ability to slow heart rate, such as UL-FS49 and ZD7288, and are known to exert their action by blocking the cardiac If current or the related neuronal Ih current (BoSmith et al., 1993; Goethals et al., 1993; DiFrancesco, 1994; Pape, 1994; Gasparini and DiFrancesco, 1997). The correlation between the inhibitory action on If and the bradycardic effect is a direct consequence of the relevance of If to the generation and control of pacemaker activity in mammalian heart (DiFrancesco, 1993). The aim of the present study was to provide a more detailed characterization of the blocking effect of ivabradine on f-channels in SAN cells.
When tested by an activation/deactivation protocol, ivabradine blocked If in a dose-dependent manner (Fig. 1) with a half-block concentration of 1.5 μM, a value similar to that obtained by Bois et al., 1996 (2.18 μM). Peréz et al. (1995) report an half inhibitory concentration of 8.5 μM for the effect of ivabradine on pacing rate in isolated guinea-pig right atria. These data agree with the view that the If block is the physiological target of the rate-reducing activity of the drug.
Use-dependent block is a mode of action shared by several ion channel blockers and arises from a preferential affinity of the drug for a specific conformational state of the channel (Hille, 2000). The experiments in Fig. 2 revealed that ivabradine binding/unbinding reactions are restricted to open f-channels only. The simplest interpretation of this finding is that the drug needs to enter a fraction of the channel pore before binding to the blocking site. According to this view, ivabradine molecules are able to access their binding site, located within the pore, and block ion flow only when the channel gate is open; when bound, drug molecules are confined from the intracellular environment by the channel gate. This idea agrees with the notion that native f/h- and HCN channels have a wide inner hydrophilic vestibule guarded by an intracellular gate (Shin et al. 2001). The structural analysis of HCN channels (Chen et al., 2001; Shin et al., 2001; Viscomi et al., 2001; Wainger et al., 2001; Wang et al., 2001) points to the involvement of the COOH terminus and the S6 segments as physical domains involved in channel gating. Thus, trapping of the drug could be associated with physical occlusion of a large intracellular channel vestibule by the interaction between these and possibly other channel domains. A recent investigation using cystein mutagenesis has confirmed the hypothesis that blocking molecules can be trapped inside HCN channels by an intracellular structure controlling voltage-dependent gating (Rothberg et al., 2002).
### Ivabradine Causes a Current-dependent Block of f-channels
A block mechanism exclusively based on restricted access to the drug binding site, i.e., where drug-channel interactions are entirely controlled by the balance between open and closed states, should be expected to produce a voltage dependence of block similar to that of channel gating. However, binding can itself be voltage-dependent, in which case the occurrence of block is a more complex function of channel state and drug binding conditions. Several agents block hyperpolarization-activated channels in a voltage-dependent way. Block of If in SAN cells by extracellular Cs+ ions is markedly voltage-dependent and increases at negative voltages (DiFrancesco, 1982). In contrast to heart rate–reducing agents, Cs+ blocks If from outside, and its action can be explained by the simple assumption that Cs+ ions enter the channel pore for a fraction of the electrical distance (∼71% from outside) before binding to the blocking site (DiFrancesco, 1982), according to a model developed originally to explain Na+ channel block by hydrogen ions (Woodhull, 1973; see Hille, 2000). The block exerted on If by UL-FS49, a rate-reducing agent, is also voltage-dependent, but exerted this time from the intracellular side of the channel and decreasing at more negative potentials; Woodhull block model applied to UL-FS49 yields a blocking site located within the pore at some 61% of the electrical distance from outside (DiFrancesco, 1994).
Another rate-reducing molecule, ZD7288, has also been reported to exert a voltage-dependent block on hyperpolarization-activated channels (Shin et al., 2001). ZD7288, UL-FS49, and ivabradine (structures in Lillie and Kobinger, 1986; Harris and Constanti, 1995; Fig. 1), are all permanently charged cations at physiological pH and therefore require an aqueous pathway to reach the blocking site within the channel pore. This agrees with the idea that a wide aqueous vestibule exists in the pore of If channels (Shin et al., 2001), and that rate-reducing agents block If by interacting with channels within the pore vestibule.
In a detailed investigation of the action of ZD7288, Shin et al. (2001) interpreted the combined open-channel block and hyperpolarization-induced block relief properties of this molecule in terms of kinetic models where drug-channel interactions only occur when channels are open, but at the same time the drug affinity changes with the channel state. In one model, affinity is higher for bound-closed than for bound-open states, whereas in a second model two open states with different binding affinities and no binding to closed states are hypothesized. According to this interpretation, bound drug molecules remain “trapped” by channel gates and cannot be released from the binding site when channels are in the closed (or in a secondary open) state. In both models, smooth fractional block curves increasing with hyperpolarization are generated as experimentally observed, and the voltage dependence of block directly reflects that of the rate constants of drug binding to the different states of the channel.
Despite the similarities between ivabradine and ZD7288 in their actions, such as the open-channel block and block relief by hyperpolarization, the effect of ivabradine reported here does not appear to conform to models where block is governed by voltage-dependent drug binding.
First, in the presence of external Cs+, hyperpolarization does not relieve a previously induced block (Fig.6). Early experimentation on If in Purkinje fibers (DiFrancesco, 1982) and in the SAN (DiFrancesco et al., 1986) has shown, by use of envelope tests, that in the presence of Cs+, current activation during hyperpolarization proceeds normally, even if channels are blocked. The activation time-course was reported to be either moderately accelerated (DiFrancesco, 1982) or unchanged by Cs+ (DiFrancesco et al., 1986). Since therefore Cs+ blocks inward current without impairing channel activation gating, Cs+-induced If inhibition should not prevent a hyperpolarization-driven removal of ivabradine molecules from their binding sites if this simply depended on gates being open. The Cs+ result discriminates between a current-dependent and a voltage-dependent block relief mechanism, and rules in favor of the former. Although the requirement of restricted drug interaction with open channels typical of “open channel blockers” remains (i.e., ivabradine has a preferential affinity to open channels), the results with Cs+ (Fig. 6) imply that on hyperpolarization, even when channels are open, the drug remains associated with the blocking site unless inward current is flowing.
Second, experiments where the driving force is changed by varying chemical gradients, rather than voltage (Fig. 7), strengthen the evidence that block is indeed a function of the driving force, and not of voltage alone. The block curve in normal Tyrode solution (Fig. 5) did not have a smooth voltage dependence, as expected for a pure voltage-dependent process, but showed a steep slope just across the If reversal potential, suggesting the possibility that block could be affected by changes in the direction of current flow. This was confirmed by the use of low Na+ solutions, which produced a block curve which was shifted relative to the curve in Tyrode by approximately the same shift in reversal potential (−22.5 mV, see Fig. 7). Also in low Na+, a steep slope in the block curve was measured across the new reversal.
A voltage-dependent binding within the channel pore of a blocking molecule not competing with permeable ions would yield a simple Boltzmann distribution function for the fractional block b (b = ratio between blocked channels and channels in control), as follows:
$\mathrm{b}={\mathrm{1}}/{\left\{\mathrm{1}+\mathrm{exp[(}{-}{\mathrm{z}^{\mathrm{{^\prime}}}}/{\left({\mathrm{RT}}/{\mathrm{F}}\right)\mathrm{)}}{\times}\left(\mathrm{E}{-}\mathrm{E}_{\mathrm{1/2}}\right)\mathrm{]}\right\}}$
(1)
where z′ is the equivalent valence of the blocking charge (i.e., z′ = z δ is the valence z of the blocking molecule multiplied by the fraction δ of electrical field crossed to reach the binding site) and E1/2 is the voltage at which half block occurs. For example, this type of treatment and the above equation (with a negative value of z′ since block is in this case exerted from outside and increasing on hyperpolarization), apply to the If block by external Cs+ ions (DiFrancesco, 1982). Obviously, in this model, z′ cannot be higher than z, the valence of the blocking drug.
On the other hand, multiion single-file models where the blocking molecules compete with permeable ions for the same binding sites, generate a voltage dependence of block that can be steeper than is justifiable by the above Boltzmann distribution (Hille and Schwarz, 1978; Hille, 2000). The steepness of the conductance-voltage relation during block is therefore used as a means to identify channels behaving as multiion, single-file pores (Hille, 2000).
Although the block curves in Fig. 7 do not display a Boltzmann-type of voltage-dependence, clearly their maximal slope is not compatible with a simply voltage-dependent block. The derivative of the block function against voltage at E = E1/2 is calculated from Eq. 1 as z′/(4RT/F); since at physiological pH ivabradine has a net valence of ∼1 (Delpon et al., 1996), the slope expected in this case should be 0.0097 mV−1 for both curves. However, the slopes in Fig. 7, as measured in the steepest region (across reversal potentials), are 0.0265 and 0.0236 mV−1 for control and low Na+ (requiring z′ = 2.7 and 2.4), respectively.
As well as a steep change of block degree, the fractional block of If by ivabradine undergoes a change in the mode of voltage dependence when the direction of current flow reverses (Fig. 7); whereas a shallow voltage-dependent decrease of block on hyperpolarization appears at E < Ef (inward current), the curve flattens up to a constant block (∼60%) at E > Ef (outward current). The reason for this behavior is unclear, but it suggests that the blocking mechanism may be more complex than that arising from antagonism between drug and permeable ions for binding sites within the pore, and may involve changes in the drug–channel interactions which depend on the direction of current flow. It is also possible, however, that the block is underestimated at the most depolarized voltages, since the time for drug–channel interaction becomes shorter with higher depolarization due to shortening of the deactivation time constant (DiFrancesco, 1999).
A possible dependence of HCN channel block by ZD7288 on the direction of current flow was not directly investigated by Shin et al. (2001). However, even in the hypothesis that this type of block could accommodate the very steep voltage-dependence reported for the ZD7288 block curve (z δ = 4.2), the latter curve differs importantly from the one described here; in fact, the voltage region of steepest slope is far more negative that the current reversal potential (see Shin et al., 2001, Fig. 3). This property cannot be easily reconciled with a current-dependent block. The If block by UL-FS49 also does not appear to be current-dependent, since Cs+ does not impair block relief by hyperpolarization (DiFrancesco, 1994). This indicates that even if If blockers have similar properties, different types of block may indeed be operating.
### Ivabradine Induces f-channel Inward Rectification which Depends on E-Ef
The dependence upon driving force, rather than voltage, is a property of inwardly rectifying K+ channels (Kir). Typically, rectification in these channels depends on E − EK rather than E, i.e., a strong current reduction is observed only at voltages more positive than the K+ equilibrium potential, EK.
Rectification in Kir channels is attributable to channel block by internal cations, and its dependence on the ion-driving force, rather than voltage alone, is explained by the assumption that Kir channels are multiion, single-file pores (Hille and Schwarz, 1978; see Hille, 2000). In multiion, single-file pores, ions move in an ordered fashion along a set of binding sites extending across the full length of the channel, and several ions can simultaneously occupy available binding sites. Under these conditions, blocking ions (i.e., ions which cannot overcome one of the energy barriers along the pore path) are driven in and out of the blocking site along with the flow of ions, which therefore determines the extent of block.
Multiion single-file pores can in fact explain several other properties of K+ channels (Hille and Schwarz, 1978). Direct confirmation of single-file arrangement in Kcsa K+ channels has been provided recently by X-ray crystallographic analysis by MacKinnon and collaborators (Morais-Cabral et al., 2001; Zhou et al., 2001).
Hyperpolarization-activated channels are structurally similar to voltage-dependent K+ (Kv) channels (Santoro and Tibbs, 1999), and have in particular the same GYG selectivity filter motif, which is at the basis of the multiion permeation properties of K+ channels (Morais-Cabral et al., 2001; Zhou et al., 2001). In view of the conservation of ion conduction pores among K+ channels (Lu et al., 2001), they too are therefore likely to have a pore with multiion, single-file conduction properties. Indeed, some of the permeability properties of pacemaker channels (in rods: h-channels) require multiple ion binding sites (Wollmuth, 1995).
In conclusion, we find that the If block by ivabradine has unusual properties when compared with the block exerted on If by other rate-reducing agents (such as UL-FS49 and ZD7288). These properties are summarized by the dependence of block on the current driving force (E-Ef) rather than on voltage alone. Ivabradine block of If is therefore similar to the one exerted by blocking cations on Kir channels and responsible for inward rectification. Our data agree with the notion that f-channels have multiion, single-file pores, and that ivabradine blocks current flow by entering the pore from the intracellular side and competing with permeating ions for a binding site along the permeation pathway. Ivabradine molecules bind preferentially to open channels and cannot reach or leave their binding site when the channel gates are closed; block relief on hyperpolarization, however, requires not only that drug molecules are freed from trapping, but also that an active “pushing” mechanism operates during inward current flow to displace drug molecules and drive them out of the pore.
Finally, the use-dependence of drug action resulting from the specific features of If block by ivabradine and the high affinity of interaction with If channels (Bois et al., 1996) make this substance particularly suitable for possible use in a clinical setting, since they amplify the rate-reducing ability at high frequencies, i.e., when it is most needed, and reduce possible side effects due to interaction of the molecule with other ion channels, respectively.
## Acknowledgments
We thank C. Altomare for contributing to part of the experiments.
We thank the Institut de Recherches Internationales Servier for providing support for this work and A. Moroni and C. Viscomi for discussion.
*
Abbreviations used in this paper: CNG, cyclic nucleotide gated; HCN, hyperpolarization-activated, cyclic nucleotide gated; SAN, sino-atrial node.
## References
Accili, E.A., G. Redaelli, and D. DiFrancesco.
1997
. Differential control of the hyperpolarization-activated (If) current by cAMP and phosphatase inhibition in rabbit sino-atrial node myocytes.
J. Physiol.
500
:
643
–651.
Beaumont, V., and R.S. Zucker.
2000
. Enhancement of synaptic transmission by cyclic AMP modulation of presynaptic Ih channels.
Nat. Neurosci.
3
:
133
–141.
Bois, P., J. Bescond, B. Renaudon, and J. Lenfant.
1996
. Mode of action of bradycardic agent, S16257, on ionic currents of rabbit sinoatrial node cells.
Br. J. Pharmacol.
118
:
1051
–1057.
BoSmith, R.E., I. Briggs, and N.C. Sturgess.
1993
. Inhibitory actions of ZENECA ZD7288 on whole-cell hyperpolarization activated inward current (If) in guinea-pig dissociated sinoatrial node cells.
Br. J. Pharmacol.
110
:
343
–349.
Brown, H.F., and D. DiFrancesco.
1980
. Voltage-clamp investigations of membrane currents underlying pacemaker activity in rabbit sino- atrial node.
J. Physiol.
308
:
331
–351.
Brown, H.F., D. DiFrancesco, and S.J. Noble.
1979
. How does adrenaline accelerate the heart?
Nature.
280
:
235
–236.
Cardenas, C.G., L.P. Mar, A.V. Vysokanov, P.B. Arnold, L.M. Cardenas, D.J. Surmeier, and R.S. Scroggs.
1999
. Serotonergic modulation of hyperpolarization-activated current in acutely isolated rat dorsal root ganglion neurons.
J. Physiol.
518
:
507
–523.
Chang, F., I.S. Cohen, D. DiFrancesco, M.R. Rosen, and C. Tromba.
1991
. Effects of protein kinase inhibitors on canine Purkinje fibre pacemaker depolarization and the pacemaker current If.
J. Physiol.
440
:
367
–384.
Chen, J., J.S. Mitcheson, M. Tristani-Firouzi, M. Lin, and M.C. Sanguinetti.
2001
. The S4-S5 linker couples voltage sensing and activation of pacemaker channels.
98
:
11277
–11282.
Delpon, E., C. Valenzuela, O. Perez, L. Franqueza, P. Gay, D.J. Snyders, and J. Tamargo.
1996
. Mechanisms of block of a human cloned potassium channel by the enantiomers of a new bradycardic agent: S-16257-2 and S-16260-2.
Br. J. Pharmacol.
117
:
1293
–1301.
Demontis, G.C., B. Longoni, U. Barcaro, and L. Cervetto.
1999
. Properties and functional roles of hyperpolarization-gated currents in guinea-pig retinal rods.
J. Physiol.
515
:
813
–828.
DiFrancesco, D.
1981
a. A new interpretation of the pace-maker current in calf Purkinje fibres.
J. Physiol.
314
:
359
–376.
DiFrancesco, D.
1981
b. A study of the ionic nature of the pace-maker current in calf Purkinje fibres.
J. Physiol.
314
:
377
–393.
DiFrancesco, D.
1982
. Block and activation of the pace-maker channel in calf Purkinje fibres: effects of potassium caesium and rubidium.
J. Physiol.
329
:
485
–507.
DiFrancesco, D.
1993
. Pacemaker mechanisms in cardiac tissue.
Annu. Rev. Physiol.
55
:
451
–467.
DiFrancesco, D.
1994
. Some properties of the UL-FS 49 block of the hyperpolarization-activated current (If) in sino-atrial node myocytes.
Pflugers Arch.
427
:
64
–70.
DiFrancesco, D.
1999
. Dual allosteric modulation of pacemaker (f) channels by cAMP and voltage in rabbit SA node.
J. Physiol.
515
:
367
–376.
DiFrancesco, D., P. Ducouret, and R.B. Robinson.
1989
. Muscarinic modulation of cardiac rate at low acetylcholine concentrations.
Science.
243
:
669
–671.
DiFrancesco, D., A. Ferroni, M. Mazzanti, and C. Tromba.
1986
. Properties of the hyperpolarizing-activated current (If) in cells isolated from the rabbit sino-atrial node.
J. Physiol.
377
:
61
–88.
DiFrancesco, D., and P. Tortora.
1991
. Direct activation of cardiac pacemaker channels by intracellular cyclic AMP.
Nature.
351
:
145
–147.
DiFrancesco, D., and C. Tromba.
1988
a. Inhibition of the hyperpolarization-activated current (if) induced by acetylcholine in rabbit sino-atrial node myocytes.
J. Physiol.
405
:
477
–491.
DiFrancesco, D., and C. Tromba.
1988
b. Muscarinic control of the hyperpolarization-activated current (If) in rabbit sino-atrial node myocytes.
J. Physiol.
405
:
493
–510.
Gasparini, S., and D. DiFrancesco.
1997
. Action of the hyperpolarization-activated current (Ih) blocker ZD 7288 in hippocampal CA1 neurons.
Pflugers Arch.
435
:
99
–106.
Gauss, R., R. Seifert, and U.B. Kaupp.
1998
. Molecular identification of a hyperpolarization-activated channel in sea urchin sperm.
Nature.
393
:
583
–587.
Goethals, M., A. Raes, and P.P. van Bogaert.
1993
. Use-dependent block of the pacemaker current If in rabbit sinoatrial node cells by zatebradine (UL-FS 49). On the mode of action of sinus node inhibitors.
Circulation.
88
:
2389
–2401.
Golowasch, J., and E. Marder.
1992
. Ionic currents of the lateral pyloric neuron of the stomatogastric ganglion of the crab.
J. Neurophysiol.
67
:
318
–331.
Harris, N.C., and A. Constanti.
1995
. Mechanism of block by ZD 7288 of the hyperpolarization-activated inward rectifying current in guinea pig substantia nigra neurons in vitro.
J. Neurophysiol.
74
:
2366
–2378.
Hille, B., and W. Schwarz.
1978
. Potassium channels as multi-ion single-file pores.
J. Gen. Physiol.
72
:
409
–442.
Hille, B. 2000. Ionic channels of excitable membranes. 3rd ed. Sinauer Associates, Inc., Sunderland, MA.
Ishii, T.M., M. Takano, L.H. Xie, A. Noma, and H. Ohmori.
1999
. Molecular characterization of the hyperpolarization-activated cation channel in rabbit heart sinoatrial node.
J. Biol. Chem.
274
:
12835
–12839.
Lillie, C., and W. Kobinger.
1986
. Investigations into the bradycardic effects of UL-FS 49 (1,3,4,5-tetrahydro-7,8-dimethoxy-3-[3-[[2-(3,4-dimethoxyphenyl) ethyl] methylimino] propyl]-2H-3-benzazepin-2-on-hydrochloride) in isolated guinea pig atria.
J. Cardiovasc. Pharmacol.
8
:
791
–797.
Lu, Z., A.M. Klem, and Y. Ramu.
2001
. Ion conduction pore is conserved among potassium channels.
Nature.
413
:
809
–813.
Ludwig, A., X. Zong, M. Jeglitsch, F. Hofmann, and M. Biel.
1998
. A family of hyperpolarization-activated mammalian cation channels.
Nature.
393
:
587
–591.
Ludwig, A., X. Zong, J. Stieber, R. Hullin, F. Hofmann, and M. Biel.
1999
. Two pacemaker channels from human heart with profoundly different activation kinetics.
EMBO J.
18
:
2323
–2329.
Kamondi, A., and P.B. Reiner.
1991
. Hyperpolarization-activated inward current in histaminergic tuberomammillary neurons of the rat hypothalamus.
J. Neurophysiol.
66
:
1902
–1911.
Maccaferri, G., M. Mangoni, A. Lazzari, and D. DiFrancesco.
1993
. Properties of the hyperpolarization-activated current in rat hippocampal CA1 pyramidal cells.
J. Neurophysiol.
69
:
2129
–2136.
Maccaferri, G., and C.J. McBain.
1996
. The hyperpolarization-activated current (Ih) and its contribution to pacemaker activity in rat CA1 hippocampal stratum oriens-alveus interneurones.
J. Physiol.
497
:
119
–130.
Magee, J.C.
1998
. Dendritic hyperpolarization-activated currents modify the integrative properties of hippocampal CA1 pyramidal neurons.
J. Neurosci.
18
:
7613
–7624.
McCormick, D.A., and H.C. Pape.
1990
. Properties of hyperpolarization-activated cation current and its role in rhythmic oscillation in thalamic relay neurones.
J. Physiol.
431
:
291
–318.
Mellor, J., R.A. Nicoll, and D. Schmitz.
2002
. Mediation of hippocampal mossy fiber long-term potentiation by presynaptic ih channels.
Science.
295
:
143
–147.
Morais-Cabral, J.H., Y. Zhou, and R. MacKinnon.
2001
. Energetic optimization of ion conduction rate by the K+ selectivity filter.
Nature.
414
:
37
–42.
Moroni, A., A. Barbuti, C. Altomare, C. Viscomi, J. Morgan, M. Baruscotti, and D. DiFrancesco.
2000
. Kinetic and ionic properties of the human HCN2 pacemaker channel.
Pflugers Arch.
439
:
618
–626.
Noble, D., and R.W. Tsien.
1968
. The kinetics and rectifier properties of the slow potassium current in calf Purkinje fibres.
J. Physiol.
195
:
185
–214.
Pape, H.C.
1994
. Specific bradycardic agents block the hyperpolarization-activated cation current in central neurons.
Neuroscience.
59
:
363
–373.
Pape, H.C.
1996
. Queer current and pacemaker: the hyperpolarization-activated cation current in neurons.
Annu. Rev. Physiol.
58
:
299
–327.
Peréz, O., P. Gay, L. Franqueza, R. Carron, C. Valenzuela, E. Delpon, and J. Tamargo.
1995
. Effects of the two enantiomers, S-16257-2 and S-16260-2, of a new bradycardic agent on guinea-pig isolated cardiac preparations.
Br. J. Pharmacol.
115
:
787
–794.
Rothberg, B.S., K.S. Shin, P.S. Phale, and G. Yellen.
2002
. Voltage-controlled gating at the intracellular entrance to a hyperpolarization-activated cation channel.
J. Gen. Physiol.
119
:
83
–91.
Santoro, B., D.T. Liu, H. Yao, D. Bartsch, E.R. Kandel, S.A. Siegelbaum, and G.R. Tibbs.
1998
. Identification of a gene encoding a hyperpolarization-activated pacemaker channel of brain.
Cell.
93
:
717
–729.
Santoro, B., and G.R. Tibbs.
1999
. The HCN gene family: molecular basis of the hyperpolarization-activated pacemaker channels. In: Molecular and functional diversity of ion channels and receptors.
.
868
:
741
–764.
Shin, S.K., B.S. Rotheberg, and G. Yellen.
2001
. Blocker state dependence and trapping in hyperpolarization-activated cation channels: evidence for an intracellular activation gated.
J. Gen. Physiol.
117
:
91
–101.
Southan, A.P., N.P. Morris, G.J. Stephens, and B. Robertson.
2000
. Hyperpolarization-activated currents in presynaptic terminals of mouse cerebellar basket cells.
J. Physiol.
526
:
91
–97.
Stevens, D.R., R. Seifert, B. Bufe, F. Muller, E. Kremmer, R. Gauss, W. Meyerhof, U.B. Kaupp, and B. Lindemann.
2001
. Hyperpolarization-activated channels HCN1 and HCN4 mediate responses to sour stimuli.
Nature.
413
:
31
–635.
Thoby-Brisson, M., P. Telgkamp, and J.M. Ramirez.
2000
. The role of the hyperpolarization-activated current in modulating rhythmic activity in the isolated respiratory network of mice.
J. Neurosci.
20
:
2994
–3005.
Thollon, C., C. Cambarrat, J. Vian, J.F. Prost, J.L. Peglion, and J.P. Vilaine.
1994
. Electrophysiological effects of S 16257, a novel sino-atrial node modulator, on rabbit and guinea-pig cardiac preparations: comparison with UL-FS 49.
Br. J. Pharmacol.
112
:
37
–42.
Vaccari, T., A. Moroni, M. Rocchi, L. Gorza, M.E. Bianchi, M. Beltrame, and D. DiFrancesco.
1999
. The human gene coding for HCN2, a pacemaker channel of the heart.
Biochim. Biophys. Acta.
1446
:
419
–425.
Vargas, G., and M.T. Lucero.
1999
. Dopamine modulates inwardly rectifying hyperpolarization-activated current (Ih) in cultured rat olfactory receptor neurons.
J. Neurophysiol.
81
:
149
–158.
Viscomi, C., C. Altomare, A. Bucchi, E. Camatini, M. Baruscotti, A. Moroni, and D. DiFrancesco.
2001
. C terminus-mediated control of voltage and cAMP gating of hyperpolarization-activated cyclic nucleotide-gated channels.
J. Biol. Chem.
276
:
29930
–29934.
Wainger, B.J., M. DeGennaro, B. Santoro, S.A. Siegelbaum, and G.R. Tibbs.
2001
. Molecular machanism of cAMP modulation of HCN pacemaker channels.
Nature.
411
:
805
–810.
Wang, J., S. Chen, and S.A. Siegelbaum.
2001
. Regulation of hyperpolarization-activated HCN channel gating and cAMP modulation due to interactions of COOH terminus and core transmembrane regions.
J. Gen. Physiol.
118
:
237
–250.
Wollmuth, L.P.
1995
. Multiple ion binding sites in Ih channels of rod photoreceptors from tiger salamanders.
Pflugers Arch.
430
:
34
–43.
Woodhull, A.M.
1973
. Ionic blockage of sodium channels in nerve.
J. Gen. Physiol.
61
:
687
–708.
Yanagihara, K., and H. Irisawa.
1980
. Inward current activated during hyperpolarization in the rabbit sinoatrial node cell.
Pflugers Arch.
385
:
11
–19.
Zhou, Y., J.H. Morais-Cabral, A. Kaufman, and R. MacKinnon.
2001
. Chemistry of ion coordination and hydration revealed by a K+ channel-Fab complex at 2.0 A resolution.
Nature.
414
:
43
–48.
|
2021-05-07 16:34:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5096328258514404, "perplexity": 6362.881800043371}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988796.88/warc/CC-MAIN-20210507150814-20210507180814-00594.warc.gz"}
|
https://math.stackexchange.com/questions/2063656/if-alpha-in-0-1-and-x-n-alpha-x-n-11-alphax-n-2-show-that-the-s
|
# If $\alpha\in (0,1]$ and $x_n=\alpha x_{n-1}+(1-\alpha)x_{n-2}$, show that the sequence $\{x_n\}$ is convergent.
If $x_1$, $x_2$ are arbitary real numbers, $\alpha\in (0,1]$ and $x_n=\alpha x_{n-1}+(1-\alpha)x_{n-2}$ for every positive integer $n$ (>2), show that the sequence $\{x_n\}$ is convergent. (Given $x_1<x_2$)
Edit
I want to prove by using the property below:
If $\{x_{2n}\}$ and $\{x_{2n-1}\}$ converges to same limit $l$ then $\{x_{n}\}$ converges to $l$.
• Hint: consider $x_n-x_{n-1}$ – Mark Bennet Dec 18 '16 at 17:40
Note that $x_{n+2}\in[x_n,x_{n+1}]$ (or $[x_{n+1},x_n]$). Note also that the length of this interval is $$\ell_n=|x_{n+1}-x_n|=|\alpha x_n+(1-\alpha)x_{n-1}-x_n|=(1-\alpha)\ell_{n-1}$$
Since $\ell_n\to 0$, apply nested intervals.
• how to use nested intervals and what will be the result – user1942348 Dec 18 '16 at 19:04
We have $x_n-x_{n-1}=(\alpha-1)(x_{n-1}-x_{n-2})$ so we have $x_n-x_{n-1}=(\alpha-1)^{n-2}(x_2-x_1)$ and hence \begin{aligned}x_n-x_{n-1}&=(\alpha-1)^{n-2}(x_2-x_1)\\\\x_{n-1}-x_{n-2}&=(\alpha-1)^{n-3}(x_2-x_1)\\&\vdots\\x_2-x_1&=(\alpha-1)^0(x_2-x_1)\end{aligned} and summing those we get $x_n-x_1=(x_2-x_1)\frac{1-(\alpha-1)^{n-1}}{2-\alpha}$ which implies that $$\color{red}{\lim\limits_{n\to\infty}x_n=\dfrac{x_2+(1-\alpha)x_1}{2-\alpha}}.$$
• Please check the denominator in $\frac{1-(\alpha-1)^{n-1}}{1-\alpha}$ after summing – user1942348 Dec 18 '16 at 19:14
• Fixed. Thank you for the observation @user1942348 – CIJ Dec 18 '16 at 19:20
$$x_n-x_{n-1}=(-1)(1-\alpha)({x_{n-1}-x_{n-2})}\\=(-1)^2(1-\alpha)^2(x_{n-2}-x_{n-1})\\=\vdots\\=(-1)^n(1-\alpha)^n(x_1-x_0)$$
Now you can complete
• Thanks. I understand. $x_n-x_{n-1}$ depends on $n=$even or odd. How to conclude convergency. Please help. – user1942348 Dec 18 '16 at 18:02
• @user1942348 $(1-\alpha)<1\implies (1-\alpha)^n\to ?$ – Qwerty Dec 18 '16 at 18:11
• It goes to zero. But how to check the subsequences $x_{2n}$ and $x_{2n-1}$ converges to same limit? How to check they are increasing/decreasing and bounded above/below? – user1942348 Dec 18 '16 at 18:15
• @user1942348 You are forgetting the $||$ - the mod value ! – Qwerty Dec 18 '16 at 18:17
• I think you are telling about cauchy ciretia. I want to use the subsequencial criteria to prove it. – user1942348 Dec 18 '16 at 18:21
|
2019-08-19 02:13:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9342091083526611, "perplexity": 445.73356680272127}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027314638.49/warc/CC-MAIN-20190819011034-20190819033034-00454.warc.gz"}
|
https://elitistreview.com/2007/03/30/favourite-burgundy-grand-cru-results/
|
# Favourite Burgundy Grand Cru – results
Well, the poll is now closed. I’d rather view the mere eighteen votes it generated as an indication of the difficulty of the question, rather than the minimal number of people who visit here; perhaps I am optimistic.
Of course, with so few votes no definite conclusions can be drawn from the results, but I am pleased Musigny got the most votes. La Tache is clearly a serious wine, though, and I don't hold it against anyone for voting for that Grand Cru. Quite a lot of votes for Morey-St.-Denis Grand Crus.
|
2022-10-07 02:22:37
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8202880620956421, "perplexity": 1988.7327865187985}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337906.7/warc/CC-MAIN-20221007014029-20221007044029-00672.warc.gz"}
|
https://www.tititudorancea.de/custom_info/d_607006aeb17b48007100ce16.html
|
Welcome to the company ! we have many years of professional experience !
rsdgj@pyzyrsd.com +86 13525609655
# Well Drilling Rig for Sale
Established in 2001, Puyang Zhong Yuan Restar Petroleum Equipment Co.,Ltd, “RSD” for short, is Henan’s high-tech enterprise with intellectual property advantages and independent legal person qualification. With registered capital of RMB 50 million, the Company has two subsidiaries-Henan Restar Separation Equipment Technology Co., Ltd We are mainly specialized in R&D, production and service of various intelligent separation and control systems in oil&gas drilling,engineering environmental protection and mining industries.We always take the lead in Chinese market shares of drilling fluid shale shaker for many years. Our products have been exported more than 20 countries and always extensively praised by customers. We are Class I network supplier of Sinopec,CNPC and CNOOC and registered supplier of ONGC, OIL India,KOC. High quality and international standard products make us gain many Large-scale drilling fluids recycling systems for Saudi Aramco and Gazprom projects.
Certificate of Honor
Customer satisfaction is our first goal!
Phone
+86 13525609655
E-Mail
rsdgj@pyzyrsd.com
Well Drilling Rig for Sale
HOMEWORK 5 SOLUTIONS 1.
1. Let S G, where Gis a group. De ne the ,centralizer, of Sto be C G(S) = fg2Gjgs= sg8s2Sg and the ,normalizer, of Sto be N G(S) = fg2GjgS= Sgg. i),Show, that C G(S) and N G(S) are subgroups of G. ii),Show, that if H G, then HEN G(H) and N G(H) is the largest such ,subgroup, of G, i.e. if HEG0 G, then G0 N G(H). iii),Show, that if H G, then C G(H) EN G(H).
Centralizer and normalizer - Academic Kids
The center of a group is both ,normal, and abelian and has many other important properties as well. We can think of the ,centralizer, of a as the largest (in the sense of inclusion) ,subgroup, H of G having having a in its center, Z(H). A related concept is that ,of the normalizer, of S in G, written as N G (S) or just N(S).
SpellCHEX Dictionary - people.dsv.su.se
This is the ,SpellCHEX dictionary, for online spell checking. [CHEX %PARSER=2.13 %FLOATED=19991204 %GENERATED=DR/ALL %BOUND=TRUE]
Centralizer and normalizer — Wikipedia Republished // WIKI 2
In mathematics, especially group theory, the ,centralizer, (also called commutant) of a subset S in a group G is the set of elements C G ( S ) {\\displaystyle C_{G}(S)} of G such that each member g ∈ C G ( S ) {\\displaystyle g\\in C_{G}(S)} commutes with each element of S, or equivalently, such that conjugation by g {\\displaystyle g} leaves each element of S fixed. The ,normalizer, of S in G ...
Normalizer and Centralizer of a Subgroup of Order 2 ...
21/9/2016, · Let $N_G(H)$ be the ,normalizer, of $H$ in $G$ and $C_G(H)$ be the ,centralizer, of $H$ in $G$. (a) ,Show, that $N_G(H)=C_G(H)$. (b) If $H$ is a ,normal subgroup, of $G$, then ,show, that $H$ is a ,subgroup, of the center $Z(G)$ of $G$.
Center Centralizer Normalizer Subgroup? | Yahoo Answers
25/9/2011, · For any subset A of G, show that Z, (,G) is a subgroup of, C_G, (A) which is a subgroup of N_G (A) which is a subgroup of G. Z (G) is the center, defined as, C_G, (G)., C_G (A), is the centralizer, defined...
Math 817{818 Qualifying Exam
(a) [10 points] ,Show that the centralizer, C G(H) of H in G is a ,normal subgroup of the normalizer, N G(H) of H in G. (b) [10 points] ,Show, that the quotient N G(H)=C G(H) is isomorphic to a ,subgroup, of the automorphism group Aut(H) of H. (3) Let G be a nite group of order p2q with p < q prime numbers. ,Show, that G is not a simple group.
We Still Can't Believe Cadillac Built a 556-hp CTS-V ...
GM’s ,brands, haven’t always been managed well, and Cadillac is no exception. Especially when it comes to unconventional performance models. Admittedly, Cadillac never offered something as unusual as the SSR convertible pickup or HHR SS panel van.But considering American tastes, Cadillac’s ,station wagon, was definitely an outlier. Especially when the Cadillac CTS-V ,Wagon, showed up to take ...
Tesla Model S Gets Turned Into $200000 Station Wagon The ,station wagon, was, however, outfitted with strips of handcrafted chromed steel to add some flair. The bespoke ,Tesla, is complemented with 21-inch turbine wheels, while beige and black leathers ... SpellCHEX Dictionary This is the ,SpellCHEX dictionary, for online spell checking. [CHEX %PARSER=2.13 %FLOATED=19991204 %GENERATED=DR/ALL %BOUND=TRUE] Eclipse Git repositories abs acos acosh addcslashes addslashes aggregate aggregate_info aggregate_methods aggregate_methods_by_list aggregate_methods_by_regexp aggregate_properties aggregate_properties_by Find the Inverse Matrix Using the Cayley-Hamilton Theorem ... 7/11/2016, · More from my site. How to Use the Cayley-Hamilton Theorem to Find the Inverse Matrix Find the inverse matrix of the$3\times 3$matrix $A=\begin{bmatrix} 7 & 2 & -2 \\ -6 &-1 &2 \\ 6 & 2 & -1 \end{bmatrix}$ ,using the Cayley-Hamilton theorem,. Solution. To apply the Cayley-Hamilton theorem, we first determine the characteristic […] Amazon.com: toy station wagon M2 Machines 1957 Chevrolet 210 Beauville ,Station Wagon, (Gloss Black w/Flames) - Wild Cards Release 12 2017 Castline Premium Edition 1:64 Scale Die-Cast Vehicle & … Rolls-Royce Wraith turns into a stylish station wagon by ... By ordering this unusual ,station wagon,, customers have the opportunity to make the interior of the car completely as they want. This particular Silver Specter was commissioned by the owner of the original Rolls-Royce Wraith who wanted to make a ,station wagon, out of it. Find the Inverse Matrix Using the Cayley-Hamilton Theorem ... 7/11/2016, · More from my site. How to Use the Cayley-Hamilton Theorem to Find the Inverse Matrix Find the inverse matrix of the$3\times 3\$ matrix $A=\begin{bmatrix} 7 & 2 & -2 \\ -6 &-1 &2 \\ 6 & 2 & -1 \end{bmatrix}$ ,using the Cayley-Hamilton theorem,. Solution. To apply the Cayley-Hamilton theorem, we first determine the characteristic […]
SpellCHEX Dictionary
This is the ,SpellCHEX dictionary, for online spell checking. [CHEX %PARSER=2.13 %FLOATED=19991204 %GENERATED=DR/ALL %BOUND=TRUE]
Minivan vs. Station Wagon: Which Is Best? | Autobytel.com
5/6/2019, · For decades, the ,station wagon, was the dominant family car in America, thanks to the extra passenger and cargo room it offered compared to a similarly sized …
|
2021-09-26 13:11:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5905455350875854, "perplexity": 6496.447947837894}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057861.0/warc/CC-MAIN-20210926114012-20210926144012-00658.warc.gz"}
|
http://mathoverflow.net/questions/126252/symmetric-powers-of-connection
|
# symmetric powers of connection
Let $\nabla: \mathcal{E} \to \mathcal{E} \otimes \Omega^1_X(\log D)$ be a locally free sheaf on some variety $X$ (smooth, projective, over a field of characteristic 0) endowed with a connection having regular singularities along a normal crossing divisor $D$.
(1) How do you put a connection on the symmetric power $\mathrm{Sym}^k \mathcal{E}$?
(2) Does it have the same singularities?
(3) If so, how are the residues of $\nabla$ and of the symmetric power related?
-
add comment
|
2014-03-07 22:12:29
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5835364460945129, "perplexity": 248.82846081245077}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999651166/warc/CC-MAIN-20140305060731-00005-ip-10-183-142-35.ec2.internal.warc.gz"}
|
http://math-doc.ujf-grenoble.fr/cgi-bin/spvol?id=XX
|
Browse by: Author name - Classification - Keywords - Nature
XX: 01, 1-27, LNM 1204 (1986)
KNIGHT, Frank B.
Poisson representation of strict regular step filtrations
Retrieve article from Numdam
XX: 02, 28-29, LNM 1204 (1986)
FAGNOLA, Franco; LETTA, Giorgio
Sur la représentation intégrale des martingales du processus de Poisson (Stochastic calculus, Point processes)
Dellacherie gave in 805 a proof by stochastic calculus of the previsible representation property for the Wiener and Poisson processes. A gap in this proof is filled in 928 for Brownian motion and here for Poisson processes
Keywords: Stochastic integrals, Previsible representation, Poisson processes
Nature: Correction
Retrieve article from Numdam
XX: 03, 30-33, LNM 1204 (1986)
MEYER, Paul-André
Sur l'existence de l'opérateur carré du champ
Retrieve article from Numdam
XX: 04, 34-39, LNM 1204 (1986)
PONTIER, Monique; STRICKER, Christophe; SZPIRGLAS, Jacques
Sur le théorème de représentation par rapport à l'innovation
Retrieve article from Numdam
XX: 05, 40-47, LNM 1204 (1986)
LIN, Cheng-De
Quand l'inégalité de Kunita-Watanabe est-elle une égalité ?
Retrieve article from Numdam
XX: 06, 48-55, LNM 1204 (1986)
PARDOUX, Étienne
Grossissement d'une filtration et retournement du temps d'une diffusion
Retrieve article from Numdam
XX: 07, 56-67, LNM 1204 (1986)
PICARD, Jean
Une classe de processus stable par retournement du temps
Retrieve article from Numdam
XX: 08, 68-80, LNM 1204 (1986)
DOSS, Halim; DOZZI, Marco
Estimation de grandes déviations pour les processus de diffusion à paramètre multidimensionnel
Retrieve article from Numdam
XX: 09, 81-94, LNM 1204 (1986)
MAZZIOTTO, Gérald; MILLET, Annie
Points, lignes et systèmes d'arrêt flous et problème d'arrêt optimal
Retrieve article from Numdam
XX: 10, 95-100, LNM 1204 (1986)
KASPI, Haya; MAISONNEUVE, Bernard
Predictable local times and exit systems
Retrieve article from Numdam
XX: 11, 101-130, LNM 1204 (1986)
NORRIS, James R.
Simplified Malliavin calculus
Retrieve article from Numdam
XX: 12, 131-161, LNM 1204 (1986)
BOULEAU, Nicolas; HIRSCH, Francis
Propriété d'absolue continuité dans les espaces de Dirichlet et applications aux équations différentielles stochastiques (Dirichlet forms, Malliavin's calculus)
This is the main result of the Bouleau-Hirsch approach'' to absolute continuity in Malliavin calculus (see The Malliavin calculus and related topics by D. Nualart, Springer1995). In the framework of Dirichlet spaces, a general criterion for absolute continuity of random vectors is established; it involves the image of the energy measure. This leads to a Lipschitzian functional calculus for the Ornstein-Uhlenbeck Dirichlet form on Wiener space, and gives absolute continuity of the laws of the solutions to some SDE's with coefficients that can be uniformly degenerate
Comment: These results are extended by the same authors in their book Dirichlet Forms and Analysis on Wiener Space, De Gruyter 1991
Keywords: Dirichlet forms, Carré du champ, Absolute continuity of laws
Nature: Original
Retrieve article from Numdam
XX: 13, 162-185, LNM 1204 (1986)
BOULEAU, Nicolas; LAMBERTON, Damien
Théorie de Littlewood-Paley et processus stables (Applications of martingale theory, Markov processes)
Meyer' probabilistic approach to Littlewood-Paley inequalities (1010, 1510) is extended by replacing the underlying Brownian motion with a stable process. The following spectral multiplicator theorem is obtained: If $(P_t)_{t\geq 0}$ is a symmetric Markov semigroup with spectral representation $P_t=\int_{[0,\infty)}e^{-t\lambda} dE_{\lambda}$, and if $M$ is a function on $R_+$ defined by $M(\lambda)=\lambda\int_0^\infty r(y)e^{-y\lambda}dy,$ where $r(y)$ is bounded and Borel on $R_+$, then the operator $T_M=\int_{[0,\infty)}M(\lambda)dE_{\lambda},$ which is obviously bounded on $L^2$, is actually bounded on all $L^p$ spaces of the invariant measure, $1<p<\infty$. The method also leads to new Littlewood-Paley inequalities for semigroups admitting a carré du champ operator
Keywords: Littlewood-Paley theory, Semigroup theory, Riesz transforms, Stable processes, Inequalities, Singular integrals, Carré du champ
Nature: Original
Retrieve article from Numdam
XX: 14, 186-312, LNM 1204 (1986)
MEYER, Paul-André
Élements de probabilités quantiques (chapters I to V)
Retrieve article from Numdam
XX: 15, 313-316, LNM 1204 (1986)
JOURNÉ, Jean-Lin; MEYER, Paul-André
Une martingale d'opérateurs bornés, non représentable en intégrale stochastique
Retrieve article from Numdam
XX: 16, 317-320, LNM 1204 (1986)
PARTHASARATHY, Kalyanapuram Rangachari
A remark on the paper Une martingale d'opérateurs bornés, non représentable en intégrale stochastique'', by J.L. Journé and P.A. Meyer
Retrieve article from Numdam
XX: 17, 321-330, LNM 1204 (1986)
MEYER, Paul-André
Quelques remarques au sujet du calcul stochastique sur l'espace de Fock
Retrieve article from Numdam
XX: 18, 331-333, LNM 1204 (1986)
PARTHASARATHY, Kalyanapuram Rangachari
Some additional remarks on Fock space stochastic calculus
Retrieve article from Numdam
XX: 19, 334-337, LNM 1204 (1986)
MEYER, Paul-André; ZHENG, Wei-An
Sur la construction de certaines diffusions
Retrieve article from Numdam
XX: 20, 338-340, LNM 1204 (1986)
RUIZ DE CHAVEZ, Juan
Sur la positivité de certains opérateurs
Retrieve article from Numdam
XX: 21, 341-348, LNM 1204 (1986)
CARLEN, Eric A.; STROOCK, Daniel W.
An application of the Bakry-Émery criterion to infinite dimensional diffusions
Retrieve article from Numdam
XX: 22, 349-351, LNM 1204 (1986)
YAN, Jia-An
A comparison theorem for semimartingales and its applications
Retrieve article from Numdam
XX: 23, 352-374, LNM 1204 (1986)
HAKIM-DOWEK, M.; LÉPINGLE, Dominique
L'exponentielle stochastique des groupes de Lie (Stochastic differential geometry)
Given a Lie group $G$ and its Lie algebra $\cal G$, this article defines and studies the stochastic exponential of a (continuous) semimartingale $M$ in $\cal G$ as the solution in $G$ to the Stratonovich s.d.e. $dX = X dM$. The inverse operation (stochastic logarithm) is also considered; various formulas are established (e.g. the exponential of $M+N$). When $M$ is a local martingale, $X$ is a martingale for the connection such that $\nabla_A B=0$ for all left-invariant vector fields $A$ and $B$
Comment: See also Karandikar Ann. Prob. 10 (1982) and 1722. For a sequel, see Arnaudon 2612
Keywords: Semimartingales in manifolds, Martingales in manifolds, Lie group
Nature: Original
Retrieve article from Numdam
XX: 24, 375-378, LNM 1204 (1986)
VECHT, D.P. van der
Ultimateness and the Azéma-Yor stopping time
Retrieve article from Numdam
XX: 25, 379-395, LNM 1204 (1986)
NUALART, David
Application du calcul de Malliavin aux équations différentielles stochastiques sur le plan
Retrieve article from Numdam
XX: 26, 396-418, LNM 1204 (1986)
MORROW, Gregory J.; SILVERSTEIN, Martin L.
Two parameter extension of an observation of Poincaré
Retrieve article from Numdam
XX: 27, 419-422, LNM 1204 (1986)
SILVERSTEIN, Martin L.
Orthogonal polynomial martingales on spheres
Retrieve article from Numdam
XX: 28, 423-425, LNM 1204 (1986)
CHUNG, Kai Lai
Remark on the conditional gauge theorem
Retrieve article from Numdam
XX: 29, 426-446, LNM 1204 (1986)
MÉTIVIER, Michel
Quelques problèmes liés aux systèmes infinis de particules et leurs limites
Retrieve article from Numdam
XX: 30, 447-464, LNM 1204 (1986)
LE GALL, Jean-François
Une approche élémentaire des théorèmes de décomposition de Williams
Retrieve article from Numdam
XX: 31, 465-502, LNM 1204 (1986)
McGILL, Paul
Integral representation of martingales in the Brownian excursion filtration (Brownian motion, Stochastic calculus)
An integral representation is obtained of all square integrable martingales in the filtration $({\cal E}^x,\ x\inR)$, where ${\cal E}^x$ denotes the Brownian excursion $\sigma$-field below $x$ introduced by D. Williams 1343, who also showed that every $({\cal E}^x)$ martingale is continuous
Comment: Another filtration $(\tilde{\cal E}^x,\ x\inR)$ of Brownian excursions below $x$ has been proposed by Azéma; the structure of martingales is quite diffferent: they are discontinuous. See Y. Hu's thesis (Paris VI, 1996), and chap.~16 of Yor, Some Aspects of Brownian Motion, Part~II, Birkhäuser, 1997
Keywords: Previsible representation, Martingales, Filtrations
Nature: Original
Retrieve article from Numdam
XX: 32, 503-514, LNM 1204 (1986)
NEVEU, Jacques
Processus ponctuels stationnaires asymptotiquement gaussiens et comportement asymptotique de processus de branchement spatiaux sur-critiques
Retrieve article from Numdam
XX: 33, 515-531, LNM 1204 (1986)
ROSEN, Jay S.
A renormalized local time for multiple intersections of planar Brownian motion (Brownian motion)
Using Fourier techniques, the existence of a renormalized local time for $n$-fold self-intersections of planar Brownian motion is obtained, thus extending the case $n=2$, obtained in the pioneering work of Varadhan (Appendix to Euclidean quantum field theory, by K.~Symanzik, in Local Quantum Theory, Academic Press, 1969)
Comment: Closely related to 2036. A general reference is Le Gall, École d'Été de Saint-Flour XX, Springer LNM 1527
Keywords: Local times, Self-intersection
Nature: Original
Retrieve article from Numdam
XX: 34, 532-542, LNM 1204 (1986)
YOR, Marc
Précisions sur l'existence et la continuité des temps locaux d'intersection du mouvement brownien dans ${\bf R}^2$
Retrieve article from Numdam
XX: 35, 543-552, LNM 1204 (1986)
YOR, Marc
Sur la représentation comme intégrales stochastiques des temps d'occupation du mouvement brownien dans ${\bf R}^d$ (Brownian motion)
Varadhan's renormalization result (Appendix to Euclidean quantum field theory, by K.~Symanzik, in Local Quantum Theory consists in centering certain sequences of Brownian functionals and showing $L^2$-convergence. The same results are obtained here by writing these centered functionals as stochastic integrals
Comment: One of mny applications of stochastic calculus to the existence and regularity of self-intersection local times. See Rosen's papers on this topic in general, and page 196 of Le Gall, École d'Été de Saint-Flour XX, Springer LNM 1527
Keywords: Local times, Self-intersection, Previsible representation
Nature: Original proofs
Retrieve article from Numdam
XX: 36, 553-571, LNM 1204 (1986)
DYNKIN, Eugene B.
Functionals associated with self-intersections of the planar Brownian motion
Retrieve article from Numdam
XX: 37, 572-611, LNM 1204 (1986)
PAGÈS, Gilles
Un théorème de convergence fonctionnelle pour les intégrales stochastiques
Retrieve article from Numdam
XX: 38, 612-613, LNM 1204 (1986)
CHARLOT, François
Sur la démonstration des formules en théorie discrète du potentiel
Retrieve article from Numdam
XX: 39, 614-614, LNM 1204 (1986)
MEYER, Paul-André
Correction au Séminaire XVIII
Retrieve article from Numdam
XX: 40, 614-614, LNM 1204 (1986)
BAKRY, Dominique
Correction au Séminaire XIX
Retrieve article from Numdam
XX: 41, 614-614, LNM 1204 (1986)
MEYER, Paul-André
Correction au Séminaire XIX
Retrieve article from Numdam
XX: 42, 614-614, LNM 1204 (1986)
MEYER, Paul-André
Correction au Séminaire XV
Retrieve article from Numdam
XX: 43, 614-614, LNM 1204 (1986)
MEYER, Paul-André
Correction au Séminaire XVI
Retrieve article from Numdam
|
2013-05-20 19:50:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7055484652519226, "perplexity": 11759.131417239716}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699201808/warc/CC-MAIN-20130516101321-00054-ip-10-60-113-184.ec2.internal.warc.gz"}
|
https://www.hpmuseum.org/forum/thread-6287-page-3.html
|
FRAM71B
11-01-2016, 01:42 PM
Post: #41
rprosperi Senior Member Posts: 3,424 Joined: Dec 2013
RE: FRAM71B
(11-01-2016 10:48 AM)Erwin Wrote: When I initialize the PORT 5.05 with 64KB (POKE "2C012"; "109100") ... my last module it is not configured as IRAM when i do the FREE PORT without an error. The SHOWPORT shows type "0" instead of "1", and I couldn't ROMCOPY (error: Device Not Found) So I configured it as ROM (POKE "2C012"; "50D100") and tried the ROMCOPY but with the "verify" error.
I think the issue could be where your 64KB block is located. Although you declare that 64KB FRAM F_Block near the end of your config string, the '71 initialize routine (in the OS) allocates larger memory blocks in lower addresses, regardless of which port they are physically (or in the case of FRAM logically) located.
Most likely, the init routine configured your RAM blocks in this order:
1. Main RAM (5.00)
2. 64KB Block (to hold the image, after converting to IRAM) (5.01)
3. Other 32KB blocks for holding other images. (5.02, 5.03, etc.)
The easiest way to check this is with the MEMBUF program and SHOW PORT commands. In fact for each new config you create, I suggest you run these 2 commands to see the actual memory map; it is not always what you expect it to be.
Run MEMBUF immediately after your initial start with the new config, then after you FREE the various PORTs, and it should be easy to track what is happening.
Also, re-read the article here for a detailed explanation of the memory configuration process.
--Bob Prosperi
11-01-2016, 02:14 PM
Post: #42
Erwin Member Posts: 120 Joined: May 2015
RE: FRAM71B
(11-01-2016 01:42 PM)rprosperi Wrote:
(11-01-2016 10:48 AM)Erwin Wrote: When I initialize the PORT 5.05 with 64KB (POKE "2C012"; "109100") ... my last module it is not configured as IRAM when i do the FREE PORT without an error. The SHOWPORT shows type "0" instead of "1", and I couldn't ROMCOPY (error: Device Not Found) So I configured it as ROM (POKE "2C012"; "50D100") and tried the ROMCOPY but with the "verify" error.
I think the issue could be where your 64KB block is located. Although you declare that 64KB FRAM F_Block near the end of your config string, the '71 initialize routine (in the OS) allocates larger memory blocks in lower addresses, regardless of which port they are physically (or in the case of FRAM logically) located.
Most likely, the init routine configured your RAM blocks in this order:
1. Main RAM (5.00)
2. 64KB Block (to hold the image, after converting to IRAM) (5.01)
3. Other 32KB blocks for holding other images. (5.02, 5.03, etc.)
The easiest way to check this is with the MEMBUF program and SHOW PORT commands. In fact for each new config you create, I suggest you run these 2 commands to see the actual memory map; it is not always what you expect it to be.
Run MEMBUF immediately after your initial start with the new config, then after you FREE the various PORTs, and it should be easy to track what is happening.
Also, re-read the article here for a detailed explanation of the memory configuration process.
Hi,
thank you, I see it in the end of the article so I have to take care to put the larger ROMs first. I understand that the smaller ROMs at the end are not a problem for the system, the 64KB is to big. I'll give this a try next weekend and change the whole configuration and give here feedback about this ...
Thanks Erwin
11-01-2016, 03:39 PM (This post was last modified: 11-02-2016 03:21 PM by Dave Frederickson.)
Post: #43
Dave Frederickson Senior Member Posts: 1,650 Joined: Dec 2013
RE: FRAM71B
(11-01-2016 10:48 AM)Erwin Wrote: When I initialize the PORT 5.05 with 64KB (POKE "2C012"; "109100") ... my last module it is not configured as IRAM when i do the FREE PORT without an error. The SHOWPORT shows type "0" instead of "1", and I couldn't ROMCOPY (error: Device Not Found) So I configured it as ROM (POKE "2C012"; "50D100") and tried the ROMCOPY but with the "verify" error.
This was described in 3. in this post:
http://hpmuseum.org/forum/thread-6287-po...l#pid62269
Understand that ROM's and IRAM's get configured after RAM. When you FREEPORT a module it moves within the address space from RAM to a higher address with the other ROM's and IRAM's. In some max RAM configurations this will cause a configuration warning just like configuring more RAM than can be addressed.
Configuring the module as ROM will, by definition, prevent it from being written.
(11-01-2016 02:14 PM)Erwin Wrote: ... I see it in the end of the article so I have to take care to put the larger ROMs first.
That's not necessary. The 71's configuration routine takes care of that.
That said, I had no issues configuring FRAM71B as above.
Code:
>PEEK$("2C000",32) D3E4D5D69718191A9B50D10000000000 >RUN MEMBUF Port Dev Seq Size Addr Type 0 0 0 4 70000 0 BUILT-IN RAM 0 1 0 4 72000 0 BUILT-IN RAM 0 2 0 4 74000 0 BUILT-IN RAM 0 3 0 4 76000 0 BUILT-IN RAM 5 4 0 128 30000 0 MAIN RAM 0 5 0 16 80000 2 HPILROM 5 0 0 16 88000 2 FTH41 ROM 5 1 0 32 D0000 2 MATHROM 5 2 0 32 C0000 2 JPC ROM 5 3 0 32 B0000 1 FRAM71 TK 5 5 0 64 90000 2 DATACQ ROM > Dave 11-22-2016, 10:53 PM (This post was last modified: 11-22-2016 11:07 PM by physill.) Post: #44 physill Junior Member Posts: 4 Joined: Oct 2015 RE: FRAM71B I hope this is the right place to post this question. If not let me know. Thanks. I am unable to get around an Invalid Arg error in line 120 of SERTOSYS as shown on page 23 of the FRAM71B manual. I am using a PilBox (with the PIL-Box Bridge, ILPer and Video Interface) to attempt to load the file HP-71B_OS2CDCC_ROM.BIN to the FRAM71B to allow use of the 2cdcc operating system. Communication between the HP-71B hardware and the virtual components is working. 10 ! SERTOSYS SERIAL TO SYS 20 DIM A$[64]
30 INPUT "SYSWRT ENABLED? Y/N";V$@ IF V$="N" THEN GOTO 20
40 DISP "START *.DMP UPLOAD"
50 S$="00000" 60 E$="1FFFF"
70 S=HTD(S$) 80 E=HTD(E$)
90 FOR I=S TO E STEP 64
100 ENTER :2 ;A$@ IF I=S+64 THEN DISP "WRITING..." !:2 = 82164A Note that I am using ENTER :4 for the HDRIVE1.DAT file on ILPER 110 I$=DTH$(I) 120 POKE I$,A$130 ! PRINT I$;": ";A$140 NEXT I 150 DISP "DONE" @ BEEP 160 END HP-71B_OS2CDCC_ROM.BIN is a file from one of the many pages I have obtained HP71 info from, perhaps a post on MoHPC. It was part of a link or ZIP file that contains the .BIN files for most if not all of the ROMs for the HP71b. SERTOSYS uses variables I$ and A$to contain a counter and the data read from the 2CDCC hex dump file, respectively. The values that are put into the variables I$ and A$on the first pass through the loop appear to be valid. Using the program HexEdit to view the contents of HP-71B_OS2CDCC_ROM.BIN and comparing to the value placed into variable A$ appears to show that the correct data is in the variable.
Line 120 POKE I$, A$: The HP-71b reference manual (error 11) implies that the correct data type is being used but lies outside the domain of definition of that function (POKE) The HP-IL manual (error 255056) implies that the error message is due to either the argument being out of the allowable range or the argument has a Directory Entry or Length that is improper. I am not really sure what is going on.
I do not have much experience with file operations on the HP71b and have never actually used any HP-IL peripheral hardware which may have educated me on how files are actually manipulated on the hardware.
I would appreciate any help that can be offered. I have not had any issues with the SERTORAM program on page 17 of the FRAM71b manual. Using that program educated me on how the ENTER statement works and I have been able with trial and error get various ROM images loaded onto the FRAM71b.
Thank you for the info on bank switching and other involved topics that I have yet to get to. I find this forum to be most helpful.
Mark W Smith
11-23-2016, 12:34 AM
Post: #45
Dave Frederickson Senior Member Posts: 1,650 Joined: Dec 2013
RE: FRAM71B
(11-22-2016 10:53 PM)physill Wrote: I am unable to get around an Invalid Arg error in line 120 of SERTOSYS as shown on page 23 of the FRAM71B manual. I am using a PilBox (with the PIL-Box Bridge, ILPer and Video Interface) to attempt to load the file HP-71B_OS2CDCC_ROM.BIN to the FRAM71B to allow use of the 2cdcc operating system. Communication between the HP-71B hardware and the virtual components is working.
Hi Mark,
The issue here is that SER2SYS was written for use with the RS-232 interface and the hex dump needs to be in ASCII not binary. HP-71B_OS2CDCC_ROM.BIN, which came from Sylvain's HP-71B Compendium, is intended for use with the Emu71 emulators.
With help from Paul Berger and Bob Prosperi, the utilities in the FRAM71B manual were modified to work with the PIL-Box and hex dumps in the format described in the Emu71/Win manual. Those utilities along with Paul's excellent MEMBUF, can be found in the FRAM71 Tool Kit: http://www.hpmuseum.org/forum/thread-4844.html
The 71B Compendium, http://www.hpmuseum.org/forum/thread-5286.html, has the hex dump in the format to be used with the PIL-Box and Sylvain's loader or the Tool Kit utilities.
Code:
-> HP-71B_SYSTEM_ROM.LIF ILPER: LIF Mass Storage File / FRAM71 -> OS2CDCC ASCII 135K 71B OS Version 2CDCC Memory Dump -> OS2CDCCL BASIC 159 71B OS Version 2CDCC Memory Dump Loader
HTH, Dave
11-24-2016, 05:55 PM
Post: #46
physill Junior Member Posts: 4 Joined: Oct 2015
RE: FRAM71B
Thank you Dave. Your advice was most helpful. Unfortunately, the FRAM71B is acting up since I had it in sysram writing mode when I was trying to configure memory blocks. Was in a hurry and did something dumb.
Mark W Smith
11-24-2016, 06:17 PM
Post: #47
Dave Frederickson Senior Member Posts: 1,650 Joined: Dec 2013
RE: FRAM71B
(11-24-2016 05:55 PM)physill Wrote: Was in a hurry and did something dumb.
Welcome to the FRAM71 Club!
12-30-2016, 12:26 PM (This post was last modified: 03-08-2017 06:45 PM by Erwin.)
Post: #48
Erwin Member Posts: 120 Joined: May 2015
RE: FRAM71B
Hi,
now in the holidays I had time enough to play with the FRAM71B. I made a documentation about my configurations and put it on the DROPBOX for download. I want to give a little back for the great work of the community, especially – Sylvain Cote, Dave Frederickson, Bob Prosperi, Hans Brueggemann, Christoph Giesselink.
One thing I had not solved yet was the implantation of a 64KB ROM - but comes time, comes solution :-)
The document is my personal one to check each single step, maybe its helpful for others. I took care of the writing, but maybe there are some faults in the transcription - feedback is welcome.
Link: FRAM71B_Configuration_Docu
EDIT 2017-01-06
• first correction of typos
• unsolved 64KB DATACQ - workaround with physical ROM
EDIT 2017-03-08
• solved 64KB DATACQ installation, correct DOKU
best regards
Erwin
12-30-2016, 02:49 PM
Post: #49
rprosperi Senior Member Posts: 3,424 Joined: Dec 2013
RE: FRAM71B
(12-30-2016 12:26 PM)Erwin Wrote: Hi,
now in the holidays I had time enough to play with the FRAM71B. I made a documentation about my configurations and put it on the DROPBOX for download. I want to give a little back for the great work of the community, especially – Sylvain Cote, Dave Frederickson, Bob Prosperi, Hans Brueggemann, Christoph Giesselink.
One thing I had not solved yet was the implantation of a 64KB ROM - but comes time, comes solution :-)
The document is my personal one to check each single step, maybe its helpful for others. I took care of the writing, but maybe there are some faults in the transcription - feedback is welcome.
Link: FRAM71B_Configuration_Docu
best regards
Erwin
Quite an impressive document Erwin!
It will take some time to review in detail, but from a quick glance it appears to be very thorough. Your use of SHOWPORT and MEMBUF at each step makes it very clear to see what is happening, and will be very helpful for folks learning to use more advanced features of FRAM71.
Thanks for creating and sharing this beautiful and useful document!
Happy New Year!
--Bob Prosperi
12-30-2016, 06:57 PM
Post: #50
Dave Frederickson Senior Member Posts: 1,650 Joined: Dec 2013
RE: FRAM71B
Very nice, Erwin.
I do see a slight mistake, however. What you describe in paragraphs 5.9 and 6.9 is "conventional" bank switching, not on-the-fly. On-the-fly bank switching:
• Is intended for backup purposes (not switching ROM's)
• By definition, does not require a power cycle
• Is invoked when only a Chip's address nibble is changed
• Requires the use of PEEK$/POKE commands as the file chain is broken See Han's example: http://www.hpmuseum.org/forum/thread-251...l#pid30686 Technical Description: http://www.hpmuseum.org/forum/thread-251...l#pid30720 You should have no issues loading a 64k ROM image but you might need to add the ;ROMSIZE=65536 option to ROMCOPY. Dave 12-30-2016, 07:55 PM Post: #51 Sylvain Cote Senior Member Posts: 1,038 Joined: Dec 2013 RE: FRAM71B Hello Erwin, I fully agree with Robert, impressive document! Thank you for sharing it. Sylvain 01-05-2017, 07:57 PM Post: #52 Erwin Member Posts: 120 Joined: May 2015 RE: FRAM71B (12-30-2016 06:57 PM)Dave Frederickson Wrote: Very nice, Erwin. I do see a slight mistake, however. What you describe in paragraphs 5.9 and 6.9 is "conventional" bank switching, not on-the-fly. You should have no issues loading a 64k ROM image but you might need to add the ;ROMSIZE=65536 option to ROMCOPY. Dave Hi Dave, you are right - it's not on the fly - I will correct this terms in the document. I tried the ROMSIZE option (described in the fact sheet to the ROMSIZE-LEX) in the past but it was not working of me - I have to do some more tests on this. Erwin 01-17-2017, 09:07 AM (This post was last modified: 01-17-2017 09:15 AM by dayd.) Post: #53 dayd Junior Member Posts: 40 Joined: Mar 2016 RE: FRAM71B I got it! My FRAM71B arrived and I played with it a bit I have no ROM modules so what an upgrade! Here’s my initial setup: Code: HP71:1BBBB FTH41:1A EDT:A MATH:1A JPC:F04 HPIL:1B KBD:C SHWP:A ALARM:B Beep:A ULIB:c RCPY:E FRAM71B 512k configuration string: D3E4D5D69798991A1B9C9D0000000000 RAM : 148k (144045 + files 7321 = 151366) IRAM : 96k Lex files, Backups + ROMs F-ROM : 112k T41, Math, JPC H-ROM : 80k (OS + HPIL) OS RAM : 32k (CR/FRAM + Devices + ... at 20000) total : 468k 512 KB space = 468 KB + 12k main ram not filled at 7A000 + 32k at F0000 Mapped FRAM blocks: 11 Module HP 4k : 1 Port Dev Seq Size Addr Type Comment 64 00000 OS 32 20000 CR Port, FRAM config & others 5 6 0 96 30000 0 Main RAM fram 5 7 0 32 60000 0 Main RAM fram 0 0 0 4 70000 0 Main RAM internal 0 1 0 4 72000 0 Main RAM internal 0 2 0 4 74000 0 Main RAM internal 0 3 0 4 76000 0 Main RAM internal 1 0 0 4 78000 0 Main RAM module 0 5 0 16 80000 2 HPIL ROM 5 0 0 16 88000 2 SC T41 ROM 5 5 0 32 90000 1 IRAM testing 5 4 0 32 A0000 1 IRAM testing 5 3 0 32 B0000 1 IRAM Backup & Lex 5 2 0 32 C0000 2 JPC ROM 5 1 0 32 D0000 2 Math ROM 32 E0000 HC T41 ROM F0000 00000000000000000000000000000000 Code: Chip_# Addr. Configuration Description of LCIM Type Size Port ------ ----- ------------- -------------------------- ---- ----- ---- ---- Chip_0 2C000 CONF D Chip_0 2C001 F-BLOCK 3 HC E0000 T41 ROM 1 ROM 32 n/a ------ ----- ------------- -------------------------- ---- ----- ---- ---- Chip_1 2C002 CONF E Chip_1 2C003 F-BLOCK 4 16KB SC T41 ROM 1 ROM 16 5.00 ------ ----- ------------- -------------------------- ---- ----- ---- ---- Chip_2 2C004 CONF D Chip_2 2C005 F-BLOCK 5 32KB Math ROM 1 ROM 32 5.01 ------ ----- ------------- -------------------------- ---- ----- ---- ---- Chip_3 2C006 CONF D Chip_3 2C007 F-BLOCK 6 32KB JPC ROM 1 ROM 32 5.02 ------ ----- ------------- -------------------------- ---- ----- ---- ---- Chip_4 2C008 CONF 9 Chip_4 2C009 F-BLOCK 7 32KB IRAM Backup & Lex 1 IRAM 32 5.03 ------ ----- ------------- -------------------------- ---- ----- ---- ---- Chip_5 2C00A CONF 9 Chip_5 2C00B F-BLOCK 8 32KB Testing IRAM for rom 1 IRAM 32 5.04 ------ ----- ------------- -------------------------- ---- ----- ---- ---- Chip_6 2C00C CONF 9 Chip_6 2C00D F-BLOCK 9 32KB Testing IRAM for rom 1 IRAM 32 5.05 ------ ----- ------------- -------------------------- ---- ----- ---- ---- Chip_7 2C00E CONF 1 Chip_7 2C00F F-BLOCK A Main RAM 96KB 1 of 3 0 RAM 32 5.06 ------ ----- ------------- -------------------------- ---- ----- ---- ---- Chip_8 2C010 CONF 1 Chip_8 2C011 F-BLOCK B Main RAM 2 of 3 0 RAM 32 5.06 ------ ----- ------------- -------------------------- ---- ----- ---- ---- Chip_9 2C012 CONF 9 Chip_9 2C013 F-BLOCK C Main RAM 3 of 3 1 RAM 32 5.06 ------ ----- ------------- -------------------------- ---- ----- ---- ---- Chip_A 2C014 CONF 9 Chip_A 2C015 F-BLOCK D Main RAM 32KB 1 RAM 32 5.07 ------ ----- ------------- -------------------------- ---- ----- ---- ---- Chip_B 2C016 CONF 0 Chip_B 2C017 F-BLOCK 0 (future swappable rom) ------ ----- ------------- -------------------------- ---- ----- ---- ---- Chip_C 2C018 CONF 0 Chip_C 2C019 F-BLOCK 0 (future swappable rom) ------ ----- ------------- -------------------------- ---- ----- ---- ---- Chip_D 2C01A CONF 0 Chip_D 2C01B F-BLOCK 0 (future swappable rom or os) ------ ----- ------------- -------------------------- ---- ----- ---- ---- Chip_E 2C01C CONF 0 Chip_E 2C01D F-BLOCK 0 (future swappable rom or os) ------ ----- ------------- -------------------------- ---- ----- ---- ---- Chip_F 2C01E CONF 0 Chip_F 2C01F F-BLOCK 0 1 F-block left for FRAM and OS config (at F_20000) ------ ----- ------------- -------------------------- ---- ----- ---- ---- I divided the main RAM, to be able to move things around more easily, without losing the content. Backup is not intended to be switched but just to protect some files (memory lost or corruption). What is the F0000 to FFFFF space reserved for? I did same testing with the samples configurations provided by Hans’ manual and when I add one more F block I get a warning. It’s not that I want to, but I’m curious, here some others question that I have: In Erwin’s pdf, sometimes, like in the beginning, IRAM identifiers are not erased by a poke, is there a reason? And my big question; what is the work around to free the ports programmatically? Thank you all for your involvement in the forum, it has been very helpful, André 01-17-2017, 03:01 PM Post: #54 rprosperi Senior Member Posts: 3,424 Joined: Dec 2013 RE: FRAM71B (01-17-2017 09:07 AM)dayd Wrote: What is the F0000 to FFFFF space reserved for? FFC00-FFFFF is reserved for OS configuration. According to the Forth (and F41 I think) Manual, when any HC ROM is present at E0000 (Forth or F41) the F0000-FFC00 block is reserved for a Debugger, which was never released. Similar comments can also be found in the IDS listings, but when IDS was released it appears the debugger was not ready (and then later abandoned) so there are no details. Note that this is NOT the same Debugger that was released later as a set of LEX files that could be loaded into normal RAM. --Bob Prosperi 01-17-2017, 05:07 PM (This post was last modified: 01-17-2017 05:10 PM by J-F Garnier.) Post: #55 J-F Garnier Senior Member Posts: 304 Joined: Dec 2013 RE: FRAM71B (01-17-2017 09:07 AM)dayd Wrote: And my big question; what is the work around to free the ports programmatically? (Warning: advanced information :-) The JPC ROM has provision for executing non-programmable functions, with the EXECUTE command. There is an additional difficulty: after executing CLAIM/FREE ports (that causes system reconfiguration), the current program is ended and the current program file is set to 'workfile'. But is is easy to manage it by adding a RUN command after the CLAIM/FREE to continue the program. For instance, create the TEST program file: 10 ! TEST 20 DISP "Doing EXECUTE..." 30 EXECUTE "FREE PORT(5) @ RUN TEST,40" 40 DISP "Done." J-F 01-17-2017, 09:11 PM Post: #56 Hans Brueggemann Member Posts: 147 Joined: Dec 2013 RE: FRAM71B (01-17-2017 03:01 PM)rprosperi Wrote: Note that this is NOT the same Debugger that was released later as a set of LEX files that could be loaded into normal RAM. ah, thanks! one more mystery solved! 01-22-2017, 04:22 AM Post: #57 dayd Junior Member Posts: 40 Joined: Mar 2016 RE: FRAM71B (01-17-2017 03:01 PM)rprosperi Wrote: According to the Forth (and F41 I think) Manual, when any HC ROM is present at E0000 (Forth or F41) the F0000-FFC00 block is reserved for a Debugger, which was never released. Similar comments can also be found in the IDS listings, but when IDS was released it appears the debugger was not ready (and then later abandoned) so there are no details. Thanks Bob, it’s very nice to have some background history, also. That’s probably because the 71b didn’t sell as the 41. I wonder how much man hours work it took to make an OS as the 71b, may it would explain why they dropped the debugger. (01-17-2017 05:07 PM)J-F Garnier Wrote: (Warning: advanced information :-) Indeed, powerful and reserved for extreme situations! It worked on my case, thank you. The ALARMLEX with SETALARM also execute commands from strings but that’s not as clean as your code. The JPC seems to be a ‘must have’ ROM (at least for me). Sorry to take so long to respond, best regards, André 01-22-2017, 09:34 AM Post: #58 Erwin Member Posts: 120 Joined: May 2015 RE: FRAM71B (01-17-2017 09:07 AM)dayd Wrote: In Erwin’s pdf, sometimes, like in the beginning, IRAM identifiers are not erased by a poke, is there a reason? Hi André, there is no reason - it was my very first full try with an easy proposal from Sylvain and I did it only one time and it was running. But I'll do it a second one and see what's the difference :-) regards Erwin 01-22-2017, 10:18 AM Post: #59 Erwin Member Posts: 120 Joined: May 2015 RE: FRAM71B (11-01-2016 03:39 PM)Dave Frederickson Wrote: Understand that ROM's and IRAM's get configured after RAM. When you FREEPORT a module it moves within the address space from RAM to a higher address with the other ROM's and IRAM's. In some max RAM configurations this will cause a configuration warning just like configuring more RAM than can be addressed. Configuring the module as ROM will, by definition, prevent it from being written. (11-01-2016 02:14 PM)Erwin Wrote: ... I see it in the end of the article so I have to take care to put the larger ROMs first. That's not necessary. The 71's configuration routine takes care of that. That said, I had no issues configuring FRAM71B as above. Code: >PEEK$("2C000",32) D3E4D5D69718191A9B50D10000000000 >RUN MEMBUF Port Dev Seq Size Addr Type 0 0 0 4 70000 0 BUILT-IN RAM 0 1 0 4 72000 0 BUILT-IN RAM 0 2 0 4 74000 0 BUILT-IN RAM 0 3 0 4 76000 0 BUILT-IN RAM 5 4 0 128 30000 0 MAIN RAM 0 5 0 16 80000 2 HPILROM 5 0 0 16 88000 2 FTH41 ROM 5 1 0 32 D0000 2 MATHROM 5 2 0 32 C0000 2 JPC ROM 5 3 0 32 B0000 1 FRAM71 TK 5 5 0 64 90000 2 DATACQ ROM >
Dave
Hi Dave,
The way you did it is running for me in the same way. But to put the DATA-ACQ module in this space it must be configured as IRAM and that is not possible. When I FREEd this space it does it without an error but it resides as RAM. You can try this with POKE "2C012"; "109100" and then try to FREE it. So maybe its not possible for 64k on the "end" of the FRAM? In the address space it is after the MAIN RAM.
I'm confused at the moment about this. No other have such problems solved? I'll try it in another sequence ... next time.
regards Erwin
01-22-2017, 02:02 PM
Post: #60
Erwin Member Posts: 120 Joined: May 2015
RE: FRAM71B
(11-01-2016 09:26 AM)Erwin Wrote: I found the basic program R64KCOPY (chu06) but it is to get the checksum from existing ROMs in a port and needs a LEX-File "TRIM$" from STRINGLX. But in the basic program there is another LEX-file not present (XFN113004) - line 100 till 103 for the checksum - so it ends in an error. Looks like it was saved in the past without the necessary LEX present in the HP71. When the prog could be run I could get the data out of my DATCQ module. Now I found the solution - with the help of Joe Horn's great the O.L.D project to searcher the XFN number and get the function or command: OLD Project It is the 'HTA$' command. So the correction of the program looks like:
Code:
10 ! THIS PROGRAM COMPUTES CHECKSUMS FOR ROMCOPY FOR 64K IRAM 20 INPUT "Port?","5";P 21 P$=":PORT("&STR$(P)&")" 30 DIM A$[96] 31 I=1 40 A$=CAT$(I,P$) @ IF A$="" THEN 100 50 F$=TRIM$(A$[1,8]) @ B$=ADDR$(F$&P$) @ PRINT A$[1,8];" ";B$; 51 O1=HTD(B$) @ IF I=1 THEN O=O1-8 @ PRINT ELSE PRINT USING "X,5D.D";(O1-O)/2 52 IF O1-O>32768 AND O2=0 THEN O2=O1-O 53 IF O1-O>32768*2 AND O3=0 THEN O3=O1-O 60 I=I+1 @ GOTO 40 100 PRINT USING '"csum 1: ",6d,x,8a';30;HTA$(PEEK$(DTH$(O+8),18)) 101 PRINT USING '"csum 2: ",6d,x,8a';O2+22;HTA$(PEEK$(DTH$(O+O2),18)) 102 PRINT USING '"csum 3: ",6d,x,8a';O3+22;HTA$(PEEK$(DTH$(O+O3),18)) 103 PRINT USING '"csum 4: ",6d,x,8a';O1-O+22;HTA$(PEEK$(DTH$(O1),18)) 200 POKE DTH$(O+30),"00" 201 POKE DTH$(O+O2+22),"00" 202 POKE DTH$(O+O3+22),"00" 203 POKE DTH\$(O1+22),"00"
maybe some could use it too - regards Erwin
« Next Oldest | Next Newest »
User(s) browsing this thread: 1 Guest(s)
|
2019-07-23 14:05:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2676765024662018, "perplexity": 3348.24080300166}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195529406.97/warc/CC-MAIN-20190723130306-20190723152306-00079.warc.gz"}
|
http://jepusto.github.io/clubSandwich/reference/vcovCR.glm.html
|
vcovCR returns a sandwich estimate of the variance-covariance matrix of a set of regression coefficient estimates from an glm object.
## Usage
# S3 method for glm
vcovCR(
obj,
cluster,
type,
target = NULL,
inverse_var = NULL,
form = "sandwich",
...
)
## Arguments
obj
Fitted model for which to calculate the variance-covariance matrix
cluster
Expression or vector indicating which observations belong to the same cluster. Required for glm objects.
type
Character string specifying which small-sample adjustment should be used, with available options "CR0", "CR1", "CR1p", "CR1S", "CR2", or "CR3". See "Details" section of vcovCR for further information.
target
Optional matrix or vector describing the working variance-covariance model used to calculate the CR2 and CR4 adjustment matrices. If a vector, the target matrix is assumed to be diagonal. If not specified, the target is taken to be the estimated variance function.
inverse_var
Optional logical indicating whether the weights used in fitting the model are inverse-variance. If not specified, vcovCR will attempt to infer a value.
form
Controls the form of the returned matrix. The default "sandwich" will return the sandwich variance-covariance matrix. Alternately, setting form = "meat" will return only the meat of the sandwich and setting form = B, where B is a matrix of appropriate dimension, will return the sandwich variance-covariance matrix calculated using B as the bread. form = "estfun" will return the (appropriately scaled) estimating function, the transposed crossproduct of which is equal to the sandwich variance-covariance matrix.
...
Additional arguments available for some classes of objects.
## Value
An object of class c("vcovCR","clubSandwich"), which consists of a matrix of the estimated variance of and covariances between the regression coefficient estimates.
vcovCR
## Examples
if (requireNamespace("geepack", quietly = TRUE)) {
data(dietox, package = "geepack")
dietox$Cu <- as.factor(dietox$Cu)
weight_fit <- glm(Weight ~ Cu * poly(Time, 3), data=dietox, family = "quasipoisson")
V_CR <- vcovCR(weight_fit, cluster = dietox\$Pig, type = "CR2")
coef_test(weight_fit, vcov = V_CR, test = "Satterthwaite")
}
#> Coef. Estimate SE t-stat d.f. (Satt) p-val (Satt) Sig.
#> (Intercept) 4.0124 0.0190 211.193 22.0 <0.001 ***
#> CuCu035 -0.0134 0.0286 -0.469 45.7 0.641
#> CuCu175 0.0330 0.0333 0.993 44.8 0.326
#> poly(Time, 3)1 12.7115 0.2414 52.655 22.0 <0.001 ***
#> poly(Time, 3)2 -1.6810 0.1456 -11.545 22.0 <0.001 ***
#> poly(Time, 3)3 0.0292 0.0566 0.517 21.9 0.611
#> CuCu035:poly(Time, 3)1 -0.0823 0.3120 -0.264 45.6 0.793
#> CuCu175:poly(Time, 3)1 -0.3242 0.3433 -0.944 44.8 0.350
#> CuCu035:poly(Time, 3)2 0.0927 0.2113 0.439 45.6 0.663
#> CuCu175:poly(Time, 3)2 -0.1777 0.1656 -1.073 44.8 0.289
#> CuCu035:poly(Time, 3)3 -0.1010 0.1013 -0.997 45.5 0.324
#> CuCu175:poly(Time, 3)3 0.1146 0.0998 1.149 44.7 0.257
|
2022-08-13 19:12:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6111916899681091, "perplexity": 12678.221964023043}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571982.99/warc/CC-MAIN-20220813172349-20220813202349-00457.warc.gz"}
|
http://www.biomedsearch.com/nih/Arterial-pressure-based-cardiac-output/21153399.html
|
Document Detail
Arterial pressure-based cardiac output monitoring: a multicenter validation of the third-generation software in septic patients. Jump to Full Text MedLine Citation: PMID: 21153399 Owner: NLM Status: MEDLINE Abstract/OtherAbstract: PURPOSE: Second-generation FloTrac software has been shown to reliably measure cardiac output (CO) in cardiac surgical patients. However, concerns have been raised regarding its accuracy in vasoplegic states. The aim of the present multicenter study was to investigate the accuracy of the third-generation software in patients with sepsis, particularly when total systemic vascular resistance (TSVR) is low.METHODS: Fifty-eight septic patients were included in this prospective observational study in four university-affiliated ICUs. Reference CO was measured by bolus pulmonary thermodilution (iCO) using 3-5 cold saline boluses. Simultaneously, CO was computed from the arterial pressure curve recorded on a computer using the second-generation (CO(G2)) and third-generation (CO(G3)) FloTrac software. CO was also measured by semi-continuous pulmonary thermodilution (CCO).RESULTS: A total of 401 simultaneous measurements of iCO, CO(G2), CO(G3), and CCO were recorded. The mean (95%CI) biases between CO(G2) and iCO, CO(G3) and iCO, and CCO and iCO were -10 (-15 to -5)% [-0.8 (-1.1 to -0.4) L/min], 0 (-4 to 4)% [0 (-0.3 to 0.3) L/min], and 9 (6-13)% [0.7 (0.5-1.0) L/min], respectively. The percentage errors were 29 (20-37)% for CO(G2), 30 (24-37)% for CO(G3), and 28 (22-34)% for CCO. The difference between iCO and CO(G2) was significantly correlated with TSVR (r(2) = 0.37, p < 0.0001). A very weak (r(2) = 0.05) relationship was also observed for the difference between iCO and CO(G3).CONCLUSIONS: In patients with sepsis, the third-generation FloTrac software is more accurate, as precise, and less influenced by TSVR than the second-generation software. Authors: Daniel De Backer; Gernot Marx; Andrew Tan; Christopher Junker; Marc Van Nuffelen; Lars Hüter; Willy Ching; Frédéric Michard; Jean-Louis Vincent Related Documents : 21097259 - Development of artificial bionic baroreflex system.11788429 - Nonlinear methods of biosignal analysis in assessing terbutaline-induced heart rate and...8854749 - Autonomic interactions and chronotropic control of the heart: heart period versus heart...10562599 - Increase in epinephrine-induced responsiveness during microgravity simulated by head-do...1497569 - Contraction frequency dependence of twitch and diastolic tension in human dilated cardi...21173809 - Evaluation of the appropriate root pressure for maintaining heartbeat during an aortic ...14636929 - Echocardiographic determination of mean pulmonary artery pressure.20337299 - Localized serous retinal detachment of macula as a marker of malignant hypertension.6830419 - Colonic compliance in patients with spinal cord injury. Publication Detail: Type: Journal Article; Multicenter Study; Validation Studies Date: 2010-12-10 Journal Detail: Title: Intensive care medicine Volume: 37 ISSN: 1432-1238 ISO Abbreviation: Intensive Care Med Publication Date: 2011 Feb Date Detail: Created Date: 2011-01-28 Completed Date: 2011-05-03 Revised Date: 2013-07-03 Medline Journal Info: Nlm Unique ID: 7704851 Medline TA: Intensive Care Med Country: United States Other Details: Languages: eng Pagination: 233-40 Citation Subset: IM Affiliation: Department of Intensive Care, Erasme University Hospital, Université Libre de Bruxelles, Route de Lennik 808, 1070 Brussels, Belgium. ddebacke@ulb.ac.be Export Citation: APA/MLA Format Download EndNote Download BibTex MeSH Terms Descriptor/Qualifier: AgedBlood Pressure / physiology*Cardiac Output / physiology*Catheterization, Swan-Ganz*FemaleHumansIntensive Care UnitsMaleMiddle AgedMonitoring, Physiologic / methods*Sepsis / physiopathology*Software*Vasoplegia / physiopathology Comments/Corrections Comment In: Intensive Care Med. 2011 Feb;37(2):183-5 [PMID: 21153398 ]
From MEDLINE®/PubMed®, a database of the U.S. National Library of Medicine
Full Text Journal Information Journal ID (nlm-ta): Intensive Care Med ISSN: 0342-4642 ISSN: 1432-1238 Publisher: Springer-Verlag, Berlin/Heidelberg Article Information Download PDF © The Author(s) 2010 Received Day: 14 Month: 7 Year: 2009 Accepted Day: 29 Month: 9 Year: 2010 Electronic publication date: Day: 10 Month: 12 Year: 2010 pmc-release publication date: Day: 10 Month: 12 Year: 2010 Print publication date: Month: 2 Year: 2011 Volume: 37 Issue: 2 First Page: 233 Last Page: 240 ID: 3028067 PubMed Id: 21153399 Publisher Id: 2098 DOI: 10.1007/s00134-010-2098-8
Arterial pressure-based cardiac output monitoring: a multicenter validation of the third-generation software in septic patients Daniel De Backer1 Address: +32-2-5553380 +32-2-5554698 ddebacke@ulb.ac.be Gernot Marx2 Andrew Tan3 Christopher Junker4 Marc Van Nuffelen1 Lars Hüter5 Willy Ching3 Frédéric Michard6 Jean-Louis Vincent1 1Department of Intensive Care, Erasme University Hospital, Université Libre de Bruxelles, Route de Lennik 808, 1070 Brussels, Belgium 2Department of Surgical Intensive Care, Aachen University Hospital, Pauwelsstrasse, 52074 Aachen Germany 3Intensive Care Unit, Queen’s Medical Center, 1301 Punchbowl Street, Honolulu, HI 96813 USA 4Intensive Care Unit, George Washington University Medical Center, 2300 Eye Street NW, Washington DC, 20037 USA 5Anesthesiology and Intensive Care, Jena University Hospital, Erlanger Allee, Jena, 07747 Germany 6Edwards LifeSciences, Critical Care Europe, Route de l’Etraz 70, 1260 Nyon, Switzerland
Introduction
The computation of stroke volume from an arterial pressure waveform is not a new concept. Proposed for the first time in 1904 [1], stroke volume calculation has been improved and refined by many investigators and companies over the last century. Deriving stroke volume from a peripheral arterial pressure curve is very challenging. Indeed, the arterial pressure waveform depends not only on stroke volume but also on arterial compliance, vascular tone, and reflection waves [2, 3]. Most methods currently available on the market require regular manual calibration to capture differences in arterial compliance and vascular tone from one patient to another and in a given patient from one time to another [2, 3]. Therefore, the accuracy of these techniques is highly dependent on the delay between two manual calibrations and on the hemodynamic stability of the patient [4].
A self-calibrated method has been available on the market since 2005 (FloTrac, Edwards LifeSciences, Irvine, CA). This method was described in detail elsewhere [5]. Briefly, cardiac output (CO) is computed from the equation:
[Formula ID: Equa]
[\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$${\text{CO }} = {\text{ pulse rate }} \times {\text{ APsd }} \times K,$$\end{document}]
where APsd is the standard deviation of arterial pressure and Κ an autocalibration factor derived from a proprietary multivariate equation. CO is updated every 20 s. This equation includes biometric variables (e.g., age and sex, which, according to the work of Langewouters et al. [6, 7], are known to affect arterial compliance) and “shape variables” describing in mathematical or statistical terms the shape of the arterial pressure curve. The equation was developed from and validated in a human database of arterial pressure tracings and thermodilution CO reference values.
The first-generation software was developed from a limited human database, and the multivariate equation updated only every 10 min. Validation studies were somewhat disappointing [8, 9]. The second-generation software is based on a larger human database, and the multivariate equation (i.e., Κ) is updated every minute (allowing the rapid capture of acute changes in vascular tone). The second-generation software has been shown to be reliable in the measurement of CO, and in the tracking of acute changes in CO but these studies were conducted mostly in cardiac surgery patients [1013]. Some concerns have been raised regarding the second-generation software in patients with hyperdynamic and vasoplegic states [1417]. A relationship (logarithmic) has been established between FloTrac accuracy and systemic vascular resistance (SVR), with FloTrac underestimating CO when SVR is low [15, 16]. To address this limitation, a third-generation software was recently developed from an even larger human database, which contains a greater proportion of hyperdynamic and vasoplegic patients.
The aim of the present multicenter study was to investigate, in a separate cohort of patients than that used to develop the algorithm, whether the third-generation software is able to accurately measure CO in patients with sepsis, particularly when SVR is low.
Materials and methods
The study was conducted in four university-affiliated intensive care units (ICUs) after approval by the ethical committee of each institution. Written informed consent was given by all patients or their legal guardian. Patients with sepsis [18], with a pulmonary artery catheter in place for hemodynamic monitoring, and with a peripheral arterial line in place for continuous blood pressure monitoring were considered for the study. Patients younger than 18 years old or less than 40 kg in weight (as this may limit the use of continuous CO measurement with a pulmonary artery catheter), with significant aortic or tricuspid valve regurgitation, or being treated with an intra-aortic balloon pump were excluded.
Measurements
All patients had a pulmonary artery catheter connected to a specific monitor (Vigilance, Edwards LifeSciences, Irvine, CA, USA) for CO measurement by bolus thermodilution (iCO) and by semi-continuous thermodilution (CCO). Bolus measurements were obtained by using 3–5 cold (<10°C) 10 mL saline boluses (CO-Set, Edwards) randomly injected throughout the respiratory cycle. The temperature of the injectate was measured at the site of injection. The consistency of the thermodilution curve was judged visually on the monitor and all sets of iCO measurements with a reproducibility less than 15% were considered for analysis [19]. CCO was obtained with automated and intermittent heating of filament wire. Five CCO values were averaged over a 5-min period (2 CCO values before and 3 CCO values after iCO measurements). All patients also had a radial (n = 32) or femoral (n = 26) arterial catheter for continuous arterial pressure monitoring. Quality of damping was ensured by visualization of oscillation decay after flushing the lines. The arterial pressure curve was recorded on a computer via a high-fidelity pressure transducer (FloTrac, Edwards Lifesciences) and CO was computed off-line from the arterial pressure curve using the second-generation (COG2, version 1.14) and the third-generation (COG3) FloTrac software. Investigators measuring and collecting iCO values were, therefore, blind to the FloTrac CO measurements. Fifteen COG2 and COG3 values were averaged over a 5-min period (7 values before and 8 values after iCO measurements).
Mean arterial pressure (MAP) was also recorded and total SVR (TSVR) was calculated [central venous pressure (CVP) was not collected] by using the equation:
[Formula ID: Equb]
[\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$${\text{TSVR }} = {\text{ MAP }} \times { 8}0/{\text{iCO}}$$\end{document}]
where iCO was the reference CO for the study.
Hemodynamic measurements were performed at the discretion of the attending physician, suggested to be at least every 4 h during daytime whereas more measurements could be obtained in case of hemodynamic instability or to evaluate the effects of therapeutic interventions. Patients were followed for 48 h.
The study was sponsored by Edwards Lifesciences. Computation of CO from arterial traces was performed by Edwards Lifesciences’ technicians. The investigators had full control of the database which was locked before analysis. Data analysis was performed by the investigators (DDB). The manuscript was drafted by the first author, and all authors reviewed the manuscript.
Statistical analysis
We used SPSS software (version 13). Results are expressed as mean ± SD. A p < 0.05 was considered significant. Bias was calculated as the mean difference of COG3, COG2, or CCO minus iCO. Limits of agreement (LOA) were calculated as ±2SD of these differences. Bias and LOA are presented as a percentage (absolute values are also provided in brackets) as suggested by Critchley and Critchley [20], because a difference between measurements (bias) of 1 L/min, for example, is more clinically significant at lower than at higher COs. The percentage error was calculated as two times the SD of the bias over the mean iCO, as previously recommended [16, 20]. A percentage error less than 30% was considered as satisfactory [20]. The precision of the reference technique (iCO) was calculated as twice its coefficient of variability of individual bolus injections at each iCO measurement [21]. Differences in bias were evaluated by a Student t test with Bonferroni correction for multiple comparisons. Differences in LOA were evaluated with a Kolmogorov–Smirnov test [22]. The relationships between TSVR and the differences of COG3, COG2, or CCO minus iCO were tested for each method using a logarithmic regression analysis.
As multiple measurements were obtained, two different analyses were performed to limit the influence of multiple measurements per patient. First we used a correction for Bland and Altman analysis for multiple measurements [23, 24]. Second, as all analyses cannot be corrected by this technique, we also present (in the ESM) an analysis conducted using only the first measurement for each patient.
Finally, we evaluated the ability of the different techniques to track changes in CO using pairs of successive CO measurements, with each CO measurement used only once. For this purpose, the direction and the amplitude of changes were evaluated [21]. Receiver operating characteristic (ROC) curves were constructed to evaluate the ability of each technique to detect concordant directional changes in CO of at least 15%, with iCO used as reference.
Results
A total of 58 patients admitted to four ICUs (Honolulu, Hawaii, USA n = 20; Jena, Germany n = 15; Brussels, Belgium n = 14; and Washington DC, USA n = 9) were enrolled in the study. Patient characteristics are summarized in Table 1. Hemodynamic profiles are provided in Table 1 (aggregate of all measurements during the study period) and in Table 1 in the ESM (baseline measurements). A total of 401 (6.9 ± 5.0 per patient) simultaneous determinations of iCO, COG2, COG3, and CCO were available for comparison. The median time between measurements was 238 (percentiles 25 and 75: 158 and 443 min, respectively) min.
Accuracy and precision of CO measurements
Overall, iCO ranged from 2.7 to 14.6 L/min, COG2 from 2.5 to 14.4 L/min, COG3 from 2.5 to 17 L/min, and CCO from 2.8 to 16 L/min. The precision of iCO was 12.4%. The mean bias and LOA were −10 and 29% (2.2 L/min) between COG2 and iCO, 0 and 30% (2.2 L/min) between COG3 and iCO, and 9 and 28% (2.1 L/min) between CCO and iCO (Fig. 1). The bias between COG3 and iCO did not differ from 0 (0.2%, 95%CI −3.7 to 4.2%, p = 0.90) and was significantly less negative than the bias between COG2 and iCO (p < 0.0001). Kolmogorov–Smirnov testing showed significant differences in the LOA between COG3 and COG2 but not between COG3 and CCO (p < 0.001 and 0.59, respectively). LOA were similar when COG3 was compared with iCO and CCO (27.2 ± 6.2% for COG3, other data not shown). Similar results were observed when a correction for multiple measurements per patient was not applied (Table 2).
When only the first measurement of CO obtained at inclusion of the patient was considered, similar bias and LOA were observed (Table 2, and Fig. 1 in the ESM; hemodynamic data are presented in Table 1 in the ESM). The bias between COG3 and iCO did not differ from 0 (−0.2%, 95%CI −4 to +4, p = 0.90) and was significantly less negative than the bias between COG2 and iCO (p < 0.0001). Kolmogorov–Smirnov testing showed significant differences in LOA between COG3 and COG2 and between COG3 and CCO (p < 0.001 and <0.01, respectively).
Influence of TSVR on the accuracy of CO measurements
The difference between iCO and COG2 was significantly correlated with TSVR (r2 = 0.37, p < 0.0001) (Fig. 2). A very weak (r2 = 0.05) but still statistically significant (p < 0.0001) relationship was also observed for the difference between iCO and COG3 (Fig. 2). The difference between iCO and CCO was not correlated with TSVR (Fig. 2). When only one point per patient was considered, there was no relationship between TSVR and the differences between iCO and COG3 or iCO and CCO, whereas a significant relationship was observed between TSVR and the difference between iCO and COG2 (Fig. 2 in the ESM).
Separating datapoints according to the median TSVR value or even to extreme quartiles of TSVR showed that the percentage errors for COG3 were similar for all TSVR ranges explored (Table 2 in the ESM).
Detection of changes in CO measurements
The AUC of the ROC curves for detecting changes in iCO of more than 15% in the same direction were 0.79 (0.72–0.87) for COG3, 0.78 (0.69–0.86) for COG2, 0.75 (0.66–0.84) for CCO, and 0.51 (0.42–0.62) for MAP (Fig. 3 in the ESM). The AUCs for COG3, COG2, and CCO were significantly higher than 0.5 and than MAP (p < 0.001) but there were no differences among the different techniques. Of note, MAP was unable to detect changes in CO in the same direction (p = 0.80). The sensitivity, specificity, positive and negative predictive values, positive and negative likelihood ratios, and percentage of correct classification (with 95%CI) were, respectively, 0.78 (0.60–0.89), 0.78 (0.69–0.84), 0.50 (0.36–0.65), 0.93 (0.85–0.96), 3.5 (2.4–5.1), 0.3 (0.2–0.5), and 0.78 (0.61–0.90) for G3; 0.76 (0.57–0.88), 0.75 (0.66–0.82), 0.44 (0.31–0.57), 0.92 (0.85–0.96), 2.9 (2.1–4.3), 0.3 (0.2–0.6), and 0.75 (0.63–0.84) for G2; and 0.72 (0.55–0.85), 0.72 (0.64–0.80), 0.43 (0.30–0.56), 0.90 (0.83–0.95), 2.6 (1.9–3.7), 0.4 (0.2–0.6), and 0.73 (0.59–0.80) for CCO.
There were no differences in the magnitude of changes in CO detected by iCO and by the other techniques (Table 3 in the ESM).
Influence of radial versus femoral line site on the accuracy of CO measurements
The bias (−3.5 vs. −1.8%) and percentage error (30.4 vs. 26.6%) between COG3 and iCO were slightly greater for radial than for femoral sites, but these differences were not significant (ns). Similar trends were observed for bias (−13.4 and −11.6) and percentage errors (33.6 vs. 30.9%) between COG2 and iCO in radial and femoral sites (p = ns).
The difference between iCO and COG3 was significantly correlated with TSVR at the radial site (r2 = 0.13, p < 0.0001) but not at the femoral site (r2 = 0.01, p = 0.74). The difference between iCO and COG2 was significantly correlated with TSVR at both sites, but the correlation was greater at the radial (r2 = 0.51, p < 0.0001) than at the femoral (r2 = 0.19, p < 0.001) site.
Discussion
Our study shows that the third-generation FloTrac software is more accurate (lower bias), as precise (%error < 30), and much less affected by SVR than the second-generation software.
Several studies have shown that the second-generation FloTrac is accurate (bias ranging between −0.30 to +0.19 L/min) and precise (%error < 30) compared with thermodilution techniques [1013]. However, other studies showed that the second-generation FloTrac may underestimate CO in hyperdynamic and vasoplegic states [1417]. In patients undergoing liver transplantation, Biais et al. [15] were the first to report a (logarithmic) relationship between FloTrac accuracy and SVR, such that the lower the SVR, the greater the bias between the FloTrac value and the reference thermodilution value. These findings were recently confirmed by Biancofiore et al. [16] in the same clinical setting. Our results obtained in septic patients are in line with these findings in that we also observed a large bias between COG2 and iCO and a logarithmic relationship between TSVR and COG2.
It has been hypothesized that significant gradients between central and peripheral arterial pulse pressures may be responsible for the underestimation of CO with second-generation FloTrac software in patients with low SVR. Indeed, such a gradient, likely related to a significant decrease in peripheral reflection waves, has been reported in vasoplegic and/or septic states and was shown to be reversible as soon as patients recovered [2529]. Logically, this “decoupling” of aortic and radial pulse pressure may be responsible for an underestimation of CO when it is computed from a peripheral arterial pressure curve. In this regard, it is important to note that all pulse contour methods should be affected by this physiological phenomenon. Results from several studies using different pulse contour methods are consistent with this hypothesis. When comparing a pulse contour method to pulmonary thermodilution in patients with septic shock, Jellema et al. [30] showed that the limits of agreement were almost two times wider in patients with low SVR (<800 dyn/(s cm5)) than in other patients (SVR > 800 dyn/(s cm5)). More recently, in cardiac surgical patients, Yamashita et al. [31] showed that when SVR was decreased dose-dependently by prostaglandin E1 infusion, the PiCCO pulse contour method underestimated CO by up to 40% compared with the reference thermodilution method. The same authors observed similar influences of vasodilation on bias and precision between the LiDCO pulse contour method and thermodilution [32]. Another group recently reported that differences between femoral and radial arterial traces may develop during patient course and generate differences in CO up to 3.0 L/min [29]. Although we strongly suspect that a decoupling between central and peripheral pulse pressure may be responsible for the lack of accuracy of pulse contour methods in vasoplegic states, evidence to support this hypothesis is still lacking and our study was not designed to address this issue.
Consistent with this hypothesis, we nevertheless found that bias and percentage error tended to be higher in the radial than the femoral site for both COG2 and COG3. This was not related to factors specific to these patients, as bias and percentage errors between CCO and iCO did not differ between patients equipped with radial or femoral lines. Even though the percentage error with COG3 using a radial line was still acceptable in these septic patients, femoral arterial lines should be preferred in these patients, whenever feasible, for CO monitoring and for arterial pressure measurements.
The new third-generation FloTrac software was developed from a human database containing many recordings from septic and liver transplant patients who were often vasodilated. Some proprietary arterial pressure waveform characteristics were identified in this subset of patients and integrated as new “shape variables” in the multivariate equation used to calculate and update Κ every minute. As a result, precision has been preserved and accuracy has been improved and is now at least as good as that of semi-continuous pulmonary thermodilution (CCO), a method widely used and accepted in critically ill patients.
Our study has some limitations. First, the ideal reference method for measuring CO has not been described. However, the most commonly used technique is an averaged set of bolus thermodilution values taken from a pulmonary artery catheter, reported to have a precision of 10–20% [19, 20]. Ideally, we should have used a highly reliable reference standard to make comparisons, such as an aortic flow probe applied directly to the aorta, but this is obviously not possible in ICU patients. However, we did optimize the precision of our thermodilution measurements by using sets of measurements with a reproducibility less than 15% [19] and using bolus thermodilution measurements as a reference. Moreover, we report similar percentage error when CCO was used as an alternative reference technique. Second, we studied patients with sepsis-induced vasoplegia, most of them receiving vasoactive support, with low TSVR as intended. Whether our findings may be extrapolated to patients with drug-induced (e.g., during anesthesia induction) or ischemia/reperfusion-induced (e.g., post liver reperfusion during transplantation surgery) vasoplegia remains to be determined. One may argue that changing the algorithm could decrease the ability of the software to measure CO in normal or high TSVR states. Our study was not specifically designed to test this hypothesis, but several measurements were obtained later in the patients’ course when SVR had normalized. In these conditions, the bias was minimal with COG3, as with previous versions, suggesting that the software also performs adequately in high and normal TSVR states. This should be confirmed in another prospective trial. Third, TSVR was computed without CVP, inducing an overestimation of SVR of around 15%. Of note, TSVR provides a better estimate of arterial compliance, a key determinant of the relationship between stroke volume and arterial waveform, than SVR as CVP is not included in this relationship. Finally, we used both radial and femoral arterial lines. Even though these were mixed in the main analysis, a separate analysis allowed us to look at the specific behavior of each catheterization site, as reported above.
In conclusion, our study shows that in patients with sepsis, the third-generation FloTrac software is more accurate, as precise, and influenced much less by TSVR than the second-generation software. Our data also demonstrate that the overall performance of the third-generation FloTrac is comparable to semi-continuous pulmonary thermodilution, a technique already widely used in septic patients.
Electronic supplementary material
Below is the link to the electronic supplementary material.
Notes
Open Access
References
1.. Erlanger J,Hooker DR. An experimental study of blood pressure and of pulse-pressure in manJohn Hopkins Hosp RepYear: 190412145378 2.. Lieshout JJ,Wesseling KH. Continuous cardiac output by pulse contour analysis?Br J AnaesthYear: 20018646746910.1093/bja/86.4.46711573617 3.. Michard F. Pulse contour analysis: fairy tale or new reality?Crit Care MedYear: 2007351791179210.1097/01.CCM.0000269351.38762.B917581371 4.. Hamzaoui O,Monnet X,Richard C,Osman D,Chemla D,Teboul JL. Effects of changes in vascular tone on the agreement between pulse contour and transpulmonary thermodilution cardiac output measurements within an up to 6-hour calibration-free periodCrit Care MedYear: 20083643444010.1097/01.CCM.OB013E318161FEC418091547 5.. Pratt B,Roteliuk L,Hatib F,Frazier J,Wallen RD. Calculating arterial pressure-based cardiac output using a novel measurement and analysis methodBiomed Instrum TechnolYear: 20074140341110.2345/0899-8205(2007)41[403:CAPCOU]2.0.CO;217992808 6.. Langewouters GJ,Wesseling KH,Goedhard WJ. The pressure dependent dynamic elasticity of 35 thoracic and 16 abdominal human aortas in vitro described by a five component modelJ BiomechYear: 19851861362010.1016/0021-9290(85)90015-64055815 7.. Langewouters GJ,Wesseling KH,Goedhard WJ. The static elastic properties of 45 human thoracic and 20 abdominal aortas in vitro and the parameters of a new modelJ BiomechYear: 19841742543510.1016/0021-9290(84)90034-46480618 8.. Sander M,Spies CD,Grubitzsch H,Foer A,Muller M,Heymann C. Comparison of uncalibrated arterial waveform analysis in cardiac surgery patients with thermodilution cardiac output measurementsCrit CareYear: 200610R16410.1186/cc510317118186 9.. Mayer J,Boldt J,Schollhorn T,Rohm KD,Mengistu AM,Suttner S. Semi-invasive monitoring of cardiac output by a new device using arterial pressure waveform analysis: a comparison with intermittent pulmonary artery thermodilution in patients undergoing cardiac surgeryBr J AnaesthYear: 20079817618210.1093/bja/ael34117218375 10.. Mehta Y,Chand RK,Sawhney R,Bhise M,Singh A,Trehan N. Cardiac output monitoring: comparison of a new arterial pressure waveform analysis to the bolus thermodilution technique in patients undergoing off-pump coronary artery bypass surgeryJ Cardiothorac Vasc AnesthYear: 20082239439910.1053/j.jvca.2008.02.01518503927 11.. Prasser C,Bele S,Keyl C,Schweiger S,Trabold B,Amann M,Welnhofer J,Wiesenack C. Evaluation of a new arterial pressure-based cardiac output device requiring no external calibrationBMC AnesthesiolYear: 20077910.1186/1471-2253-7-917996086 12.. Mayer J, Boldt J, Wolf MW, Lang J, Suttner S (2008) Cardiac output derived from arterial pressure waveform analysis in patients undergoing cardiac surgery: validity of a second generation device. Anesth Analg 106:867–872, table 13.. Senn A,Button D,Zollinger A,Hofer CK. Assessment of cardiac output changes using a modified FloTrac/Vigileo algorithm in cardiac surgery patientsCrit CareYear: 200913R3210.1186/cc773919261180 14.. Sakka SG,Kozieras J,Thuemer O,Hout N. Measurement of cardiac output: a comparison between transpulmonary thermodilution and uncalibrated pulse contour analysisBr J AnaesthYear: 20079933734210.1093/bja/aem17717611251 15.. Biais M,Nouette-Gaulain K,Cottenceau V,Vallet A,Cochard JF,Revel P,Sztark F. Cardiac output measurement in patients undergoing liver transplantation: pulmonary artery catheter versus uncalibrated arterial pressure waveform analysisAnesth AnalgYear: 20081061480148610.1213/ane.0b013e318168b30918420863 16.. Biancofiore G,Critchley LA,Lee A,Bindi L,Bisa M,Esposito M,Meacci L,Mozzo R,DeSimone P,Urbani L,Filipponi F. Evaluation of an uncalibrated arterial pulse contour cardiac output monitoring system in cirrhotic patients undergoing liver surgeryBr J AnaesthYear: 2009102475410.1093/bja/aen34319059920 17.. Della Rocca G,Costa MG,Chiarandini P,Bertossi G,Lugano M,Pompei L,Coccia C,Sainz-Barriga M,Pietropaoli P. Arterial pulse cardiac output agreement with thermodilution in patients in hyperdynamic conditionsJ Cardiothorac Vasc AnesthYear: 20082268168710.1053/j.jvca.2008.02.02118922423 18.. Levy MM,Fink MP,Marshall JC,Abraham E,Angus D,Cook D,Cohen J,Opal SM,Vincent JL,Ramsay G. 2001 SCCM/ESICM/ACCP/ATS/SIS International Sepsis Definitions ConferenceCrit Care MedYear: 2003311250125610.1097/01.CCM.0000050454.01978.3B12682500 19.. Stetz CW,Miller RG,Kelly GE,Raffin TA. Reliability of the thermodilution method in the determination of cardiac output in clinical practiceAm Rev Respir DisYear: 1982126100110046758640 20.. Critchley LA,Critchley JA. A meta-analysis of studies using bias and precision statistics to compare cardiac output measurement techniquesJ Clin Monit ComputYear: 199915859110.1023/A:100998261138612578081 21.. Squara P,Cecconi M,Rhodes A,Singer M,Chiche JD. Tracking changes in cardiac output: methodological considerations for the validation of monitoring devicesIntensive Care MedYear: 2009351801180810.1007/s00134-009-1570-919593546 22.. Sun JX,Reisner AT,Saeed M,Heldt T,Mark RG. The cardiac output from blood pressure algorithms trialCrit Care MedYear: 200937728010.1097/CCM.0b013e318193017419112280 23.. Bland JM,Altman DG. Measuring agreement in method comparison studiesStat Methods Med ResYear: 1999813516010.1191/09622809967381927210501650 24.. Myles PS,Cui J. Using the Bland-Altman method to measure agreement with repeated measuresBr J AnaesthYear: 20079930931110.1093/bja/aem21417702826 25.. Stern DH,Gerson JI,Allen FB,Parker FB. Can we trust the direct radial artery pressure immediately following cardiopulmonary bypass?AnesthesiologyYear: 19856255756110.1097/00000542-198505000-000023994020 26.. Bilo HJ,Strack van Schijndel RJ,Schreuder WO,Groeneveld AB,Thijs LG. Decreased reflection coefficient as a possible cause of low blood pressure in severe septicaemiaIntensive Care MedYear: 19891513713910.1007/BF002959942715504 27.. Hynson JM,Katz JA,Mangano DT. On the accuracy of intra-arterial pressure measurement: the pressure gradient effectCrit Care MedYear: 1998261623162410.1097/00003246-199810000-000039781710 28.. Dorman T,Breslow MJ,Lipsett PA,Rosenberg JM,Balser JR,Almog Y,Rosenfeld BA. Radial artery pressure monitoring underestimates central arterial pressure during vasopressor therapy in critically ill surgical patientsCrit Care MedYear: 1998261646164910.1097/00003246-199810000-000149781720 29.. Smith J,Camporota L,Beale R. Vincent JLMonitoring arterial blood pressure and cardiac output using central or peripheral arterial pressure waveforms2009 Yearbook of intensive care and emergency medicineYear: 2009HeidelbergSpringer285296 30.. Jellema WT,Wesseling KH,Groeneveld AB,Stoutenbeek CP,Thijs LG,Lieshout JJ. Continuous cardiac output in septic shock by simulating a model of the aortic input impedance: a comparison with bolus injection thermodilutionAnesthesiologyYear: 1999901317132810.1097/00000542-199905000-0001610319780 31.. Yamashita K,Nishiyama T,Yokoyama T,Abe H,Manabe M. The effects of vasodilation on cardiac output measured by PiCCOJ Cardiothorac Vasc AnesthYear: 20082268869210.1053/j.jvca.2008.04.00718922424 32.. Yamashita K,Nishiyama T,Yokoyama T,Abe H,Manabe M. Effects of vasodilation on cardiac output measured by PulseCOJ Clin Monit ComputYear: 20072133533910.1007/s10877-007-9093-917896183
Figures
[Figure ID: Fig1] Fig. 1 Bland & Altman representations of COG2, COG3, and CCO versus iCO. Panel a shows relation between COG2 and iCO, panel b COG3 and iCO, and panel c CCO and iCO. The bias and limits of agreements, computed with correction for multiple measurements [23, 24], are provided in Table 2 [Figure ID: Fig2] Fig. 2 Logarithmic relationships between total systemic vascular resistance (TSVR) and the differences between COG2 and iCO, COG3 and iCO, and CCO and iCO using all patient data. Panel a shows relation between COG2 and iCO, panel b COG3 and iCO, and panel c CCO and iCO
Tables
[TableWrap ID: Tab1] Table 1
Patient characteristics
Age (years) 62 ± 14 Sex (M/F) 40/18 Weight (kg) 81 ± 24 Height (cm) 169 ± 12 Body surface area (m2) 1.88 ± 0.25 Body mass index 28 ± 8 History of cardiac disease 25 (43%) History of vascular disease 4 (7%) Patient type Medical 41 (71%) Surgical (post-op.) 13 (22%) Trauma 4 (7%) Vasoactive support Norepinephrine 39 (67%) Vasopressin 9 (16%) Dopamine 1 (2%) Inotropic support Dobutamine 16 (28%) Milrinone 7 (12%) Levosimendan 5 (9%) Mechanical ventilation 51 (88%) Hemodynamic profile (aggregate measurements over the entire study period) Heart rate (bpm) 97 ± 19 Mean arterial pressure (mmHg) 76 ± 11 iCO (L/min) 7.5 ± 2.0 COG2 (L/min) 6.5 ± 1.5 COG3 (L/min) 7.3 ± 2.1 CCO (L/min) 8.1 ± 2.1 TSVR (dyn/(s cm5)) 875 ± 283
TSVR total systemic vascular resistance; iCO cardiac output measured by bolus thermodilution; COG2 cardiac output measured by second-generation FloTrac; COG3 cardiac output measured by third-generation FloTrac; CCO cardiac output measured by semi-continuous thermodilution
[TableWrap ID: Tab2] Table 2
Mean bias and limits of agreements (95% confidence interval)
Bias % (95%CI) Percentage error % (95%CI)
All measurements (58 patients/401 measurements), without correction
G3 − iCO −2.6 (−4.1 to −1.2*,†) 29.2 (25.2–34.2)
G2 − iCO −12.4 (−14.0 to −10.8) 32.5 (26.3–36.7)
CCO − iCO 8.2 (7.0–9.5)* 25.4 (21.3–29.5)
All measurements (58 patients/401 measurements), with correction [23, 24]
G3 − iCO 0.2 (−3.7 to 4.2)*,† 30.4 (23.6–37.2)
G2 − iCO −10.3 (−15.4 to −5.3) 28.6 (20–37.2)
CCO − iCO 9.5 (5.8–13.1)* 28.0 (21.8–34.2)
First measurement only (58 patients/58 measurements)
G3 − iCO −2.6 (−6.4 to −1.1)*,† 29.2 (22.7–35.7)
G2 − iCO −12.4 (−16.6 to −8.2) 32.8 (27.5–38.1)
CCO − iCO 8.2 (5.0–11.5)* 25.6 (19.9–31.3)
iCO cardiac output measured by bolus thermodilution; COG2 cardiac output measured by second-generation FloTrac; COG3 cardiac output measured by third-generation FloTrac; CCO cardiac output measured by semi-continuous thermodilution
p < 0.05 versus G2, p < 0.05 versus CCO
Article Categories:Original Keywords: Keywords Cardiac output, Monitoring, Non-invasive, Systemic vascular resistance.
Previous Document: Dissection of QTL effects for root traits using a chromosome arm-specific mapping population in brea...
Next Document: Interpretation of blood pressure signal: physiological bases, clinical relevance, and objectives dur...
|
2014-04-17 04:49:02
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3720988631248474, "perplexity": 14108.376134528904}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00413-ip-10-147-4-33.ec2.internal.warc.gz"}
|
https://www.researchgate.net/scientific-contributions/Santiago-Quintero-2172759970?_sg=v-GWzQSa3xYkA6f5cXye5HpcLoMBY0Thnrde_myrkovkJOka1FN_9HlNluFDTBXEmZLC7riGokHoPBA
|
# Santiago Quintero's research while affiliated with École Polytechnique and other places
## Publications (11)
Article
We describe a model for polarization in multi-agent systems based on Esteban and Ray's standard family of polarization measures from economics. Agents evolve by updating their beliefs (opinions) based on an underlying influence graph, as in the standard DeGroot model for social learning, but under a confirmation bias; i.e., a discounting of opinion...
Preprint
Full-text available
Structures involving a lattice and join-endomorphisms on it are ubiquitous in computer science. We study the cardinality of the set $\mathcal{E}(L)$ of all join-endomorphisms of a given finite lattice $L$. In particular, we show for $\mathbf{M}_n$, the discrete order of $n$ elements extended with top and bottom, $| \mathcal{E}(\mathbf{M}_n) | =n!\m... Preprint Full-text available Let$L$be a distributive lattice and$\mathcal{E}(L)$be the set of join endomorphisms of$L$. We consider the problem of finding$f \sqcap_{{\scriptsize \mathcal{E}(L)}} g$given$L$and$f,g\in \mathcal{E}(L)$as inputs. (1) We show that it can be solved in time$O(n)$where$n=| L |$. The previous upper bound was$O(n^2)\$. (2) We characterize t...
Preprint
We describe a model for polarization in multi-agent systems based on Esteban and Ray's standard measure of polarization from economics. Agents evolve by updating their beliefs (opinions) based on an underlying influence graph, as in the standard DeGroot model for social learning, but under a confirmation bias; i.e., a discounting of opinions of age...
Chapter
Let L be a distributive lattice and E(L) be the set of join endomorphisms of L. We consider the problem of finding f⊓E(L)g given L and f,g∈E(L) as inputs. (1) We show that it can be solved in time O(n) where n=|L|. The previous upper bound was O(n2). (2) We characterize the standard notion of distributed knowledge of a group as the greatest lower b...
Chapter
We describe a model for polarization in multi-agent systems based on Esteban and Ray’s standard measure of polarization from economics. Agents evolve by updating their beliefs (opinions) based on an underlying influence graph, as in the standard DeGroot model for social learning, but under a confirmation bias; i.e., a discounting of opinions of age...
Preprint
Full-text available
We describe a model for polarization in multi-agent systems based on Esteban and Ray's standard measure of polarization from economics. Agents evolve by updating their beliefs (opinions) based on an underlying influence graph, as in the standard DeGroot model for social learning, but under a confirmation bias; i.e., a discounting of opinions of age...
Article
Spatial constraint systems (scs) are semantic structures for reasoning about spatial and epistemic information in concurrent systems. We develop the theory of scs to reason about the distributed information of potentially infinite groups. We characterize the notion of distributed information of a group of agents as the infimum of the set of join-pr...
Preprint
Full-text available
We describe a model for polarization in multi-agent systems based on Esteban and Ray's classic measure of polarization from economics. Agents evolve by updating their beliefs (opinions) based on the beliefs of others and an underlying influence graph. We show that polarization eventually disappears (converges to zero) if the influence graph is stro...
Preprint
Full-text available
Spatial constraint systems (scs) are semantic structures for reasoning about spatial and epistemic information in concurrent systems. We develop the theory of scs to reason about the distributed information of potentially infinite groups. We characterize the notion of distributed information of a group of agents as the infimum of the set of join-pr...
Chapter
Structures involving a lattice and join-endomorphisms on it are ubiquitous in computer science. We study the cardinality of the set $${\mathcal {E}}(L)$$ of all join-endomorphisms of a given finite lattice $$L$$. In particular, we show that when $$L$$ is $$\mathbf {M}_n$$, the discrete order of n elements extended with top and bottom, \(| {\mathcal...
## Citations
... Our goal when implementing nudges as interventions should be to encourage and imply. Providing information, in the form of nudges, which heavily support the dissuade from one side, may cause heavy strengthening of preexisting radical beliefs due to the weight carried by the confirmation bias in polarized individuals [6], [7]. Instead of providing propaganda/counter-propaganda, we need to merely confuse or provoke thought amongst our targets. ...
... Their work explores the dependence of knowledge in a distributed system on the way processes communicate with one another. Guzmán et al. [23] introduce the theory of group space functions to reason about the information distributed among the members of a potentially infinite group. They develop the semantic foundations and algorithms to reason about distributed knowledge in multi-agent systems and analyze the properties of distributed spaces for reasoning about the distributed knowledge of such systems. ...
... using equation (16). Then, for f , g cotight, we have ...
|
2023-03-21 06:07:59
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4916592240333557, "perplexity": 1381.7120625026016}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943625.81/warc/CC-MAIN-20230321033306-20230321063306-00615.warc.gz"}
|
https://www.isr-publications.com/jnsa/articles-1863-on-nabla-distance-and-fixed-point-theorems-in-generalized-partially-ordered-d-metric-spaces
|
# On $\nabla^{**}$-distance and fixed point theorems in generalized partially ordered $D^*$-metric spaces
Volume 8, Issue 1, pp 46--54
• 1169 Views
### Authors
Alaa Mahmood AL. Jumaili - School of Mathematics and Statistics, Huazhong University of Science and Technology Wuhan city, Hubei province, Post. No. 430074, China. Xiao Song Yang - School of Mathematics and Statistics, Huazhong University of Science and Technology Wuhan city, Hubei province, Post. No. 430074, China.
### Abstract
In this paper, we introduce a new concept on a complete generalized $D^*$-metric space by using the concept of generalized $D^*$-metric space ($D^*$-cone metric space) called $\nabla^{**}$-distance and, by using the concept of the $\nabla^{**}$-distance we prove some new fixed point theorems in complete partially ordered generalized $D^*$-metric space which is the main result of our paper.
### Share and Cite
##### ISRP Style
Alaa Mahmood AL. Jumaili, Xiao Song Yang, On $\nabla^{**}$-distance and fixed point theorems in generalized partially ordered $D^*$-metric spaces, Journal of Nonlinear Sciences and Applications, 8 (2015), no. 1, 46--54
##### AMA Style
Jumaili Alaa Mahmood AL., Yang Xiao Song, On $\nabla^{**}$-distance and fixed point theorems in generalized partially ordered $D^*$-metric spaces. J. Nonlinear Sci. Appl. (2015); 8(1):46--54
##### Chicago/Turabian Style
Jumaili, Alaa Mahmood AL., Yang, Xiao Song. "On $\nabla^{**}$-distance and fixed point theorems in generalized partially ordered $D^*$-metric spaces." Journal of Nonlinear Sciences and Applications, 8, no. 1 (2015): 46--54
### Keywords
• Fixed point theorem
• generalized $D^*$-metric spaces
• $\nabla^{**}$-distance.
• 47H10
• 54H25
### References
• [1] R. P. Agarwal, M. A. El-Gebeily, D. O'Regan, Generalized contractions in partially ordered metric spaces , Appl. Anal., 87 (2008), 1-8.
• [2] C. T. Aage, J. N. Salunke, Some fixed points theorems in generalized $D^*$-metric spaces, Appl. Sci., 12 (2010), 1-13.
• [3] A. M. AL. Jumaili, X. S. Yang, Fixed point theorems and $\nabla^{**}$-distance in partially ordered $D^*$-metric spaces, Int. J. Math. Anal., 6 (2012), 2949-2955.
• [4] L. B. Ćirić, A generalization of Banach's contraction principle, Proc. Amer. Math. Soc., 45 (1974), 267-273.
• [5] L. B. Ćirić, Coincidence and fixed points for maps on topological spaces, Topology Appl., 154 (2007), 3100-3106.
• [6] L. B. Ćirić, S. N. Jesić, M. M. Milovanović, J. S. Ume, On the steepest descent approximation method for the zeros of generalized accretive operators , Nonlinear Anal.-TMA., 69 (2008), 763-769.
• [7] B. C. Dhage, Generalized metric spaces and mappings with fixed point, Bull. Calcutta Math, Soc., 84 (1992), 329-336.
• [8] J. X. Fang, Y. Gao , Common fixed point theorems under strict contractive conditions in Menger spaces, Nonlinear Anal.-TMA., 70 (2009), 184-193.
• [9] T. Gnana Bhaskar, V. Lakshmikantham, Fixed point theorems in partially ordered metric spaces and applications, Nonlinear Anal.-TMA., 65 (2006), 1379-1393.
• [10] T. Gnana Bhaskar, V. Lakshmikantham, J. Vasundhara Devi, Monotone iterative technique for functional differential equations with retardation and anticipation, Nonlinear Anal.-TMA., 66 (2007), 2237-2242.
• [11] N. Hussain, Common fixed points in best approximation for Banach operator pairs with Ćirić type I-contractions, J. Math. Anal. Appl., 338 (2008), 1351-1363.
• [12] J. J. Nieto, R. R. Lopez, Contractive mapping theorems in partially ordered sets and applications to ordinary differential equations, Order, 22 (2005), 223-239.
• [13] V. L. Nguyen, X. T. Nguyen, Common fixed point theorem in compact $D^*$-metric spaces, Int. Math. Forum, 6 (2011), 605-612.
• [14] J. J. Nieto, R. R. Lopez, Existence and uniqueness of fixed point in partially ordered sets and applications to ordinary differential equations, Acta Math. Sin. Eng. Ser., 23 (2007), 2205-2212.
• [15] D. O'Regan, R. Saadati , Nonlinear contraction theorems in probabilistic spaces, Appl. Math. Comput., 195 (2008), 86-93.
• [16] A. Petruşel, I. A. Rus, Fixed point theorems in ordered L-spaces, Proc. Amer. Math. Soc., 134 (2006), 411-418.
• [17] A. C. M. Ran, M. C. B. Reurings, A fixed point theorem in partially ordered sets and some applications to matrix equations , Proc. Amer. Math. Soc., 132 (2004), 1435-1443.
• [18] S. Sedghi, N. Shobe, H. Zhou, A common fixed point theorem in $D^*$-metric spaces, Fixed Point Theory and Applications. Article ID 27906, (2007), 13 pages.
• [19] R. Saadati, S. M. Vaezpour, P. Vetro, B. E. Rhoades, Fixed point theorems in generalized partially ordered G-metric spaces, Mathematical and Computer Modelling, 52 (2010), 797-801.
• [20] T. Veerapandi, A. M. Pillai, Some common fixed point theorems in $D^*$- metric spaces , African J. Math. Computer Sci. Research, 4 (2011), 357-367.
• [21] T. Veerapandi, A. M. Pillai, A common fixed point theorems in $D^*$- metric spaces, African J. Math. Computer Sci. Research, 4 (8) (2011), 273-280.
|
2020-09-26 02:00:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5738754868507385, "perplexity": 4740.695166836752}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400232211.54/warc/CC-MAIN-20200926004805-20200926034805-00609.warc.gz"}
|
http://www.bo-yang.net/2014/08/09/retrive-string-from-slices
|
### 1. Problem
Given a string, such as 01001010101001101011, we can randomly sliced multiple substrings. Assume that during the slicing, due to some unexpected noises, some characters may flip(0->1 or 1->0). For example:
Position: 0123456789.........
String: 01001010101001101011
slice1: 1001110101000
slice2: 1010111001111
slice3: 10101
slice4: 1101011
In above sample, slice1 starts from position 1(assume that the index of a string begins with 0), slice2 starts from position 4, slice3 starts from 4, and slice4 starts from 13. In slice1, 0 flips to 1 at position 5 and 1 flips to 0 at position 13.
For one specific position in the original string, if it is 1, then the probability of flipping to 0 in a slicing is 0.1; and vice versa(i.e.Prob(0->1)=0.1).
The problem is: if we only have multiple slices(the length of each slice may vary) and their starting positions in the string, and we don’t know the original string, given an arbitrary position in the original string, how can we calculate the probability that position is a 1?
Assume most positions will be covered at least once in slices, and we have following parameters:
p01=0.1; // Probability a ‘0’ in string but flipped to a ‘1’ in a slice
p10=0.1; // Probability a ‘1’ in string but flipped to a ‘0’ in a slice
p1=0.5; // Prior probability that any given position in string is a ‘1’
We can also assume that the string is a random string of 0s and 1s, and during slicing, each position is sampled independently.
For the above example string and four slices, we already have following probabilities for each position:
Pos Prob
0 0.500
1 0.900
2 0.100
3 0.100
4 0.999
5 0.100
6 0.999
7 0.001
8 0.999
9 0.500
10 0.988
11 0.012
12 0.012
13 0.900
14 0.988
15 0.500
16 0.988
17 0.100
18 0.900
19 0.900
### 2. Solution
Obviously, the numbers of 0s and 1s in all slices for each position can be counted using a hashmap(unordered_map in C++). However, the difficult part is to find a model to calculate the probabilities. Simple multiplicaiton or subtraction won’t work, because we cannot get 0.999 with the given probabilities and arithmetic operations. Besides, the Bayes’ theorem also cannot be used directly, because we don’t know the number of slices and the computation would be too complicated. And we also may be tempted to try Markov Model, but Markov Model cannot fit all positions in the above example, such as positions 4(1,1,1), 5(1,0,0), 9(0,1) and 13(0,1,1).
The best model to fit this problem is the Binomial Distribution:
The binomial distribution models the total number of successes in repeated trials from an infinite population under the following conditions:
• Only two outcomes are possible on each of n trials.
• The probability of success for each trial is constant.
• All trials are independent of each other.
The probability density function(pdf) of Binomial Distribution is:
As for this problem, given a position in the original string, we first need to find the probability of a 0 occurs in this position(in this case 0=>1, say bp01), and then calculate the probability of a 1 occurs in this position(in this case 1=>1, say bp11). Finally, we can get the probability of 1 in this position in the original string by: p=bp11/(bp01+bp11).
Take position 5 as an example. We get one 1 and two 0s in position 5 in three observations. We can calculate b11 using Matlab binopdf function. Since p11=1-p10=0.9, therefore we have bp11=binopdf(1,3,0.9)=0.027. As for bp01, because p01=0.1, we have bp01=binopdf(1,3,0.1)=0.243. Therefore, we can calc the probability of 1 in position 5 by p=0.027/(0.243 + 0.027)=0.1.
### 3. Source Code
The implementation of this problem can be found in my GitHub Channel.
|
2018-01-17 02:47:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7550287246704102, "perplexity": 748.5027414000539}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084886794.24/warc/CC-MAIN-20180117023532-20180117043532-00308.warc.gz"}
|
http://fallout.firedrakecreative.com/doku.php?id=playerhomes:watchstationaegis
|
# Tammer's Wasteland Workshop
Changing the World Since 2016
### Modding Basics
playerhomes:watchstationaegis
# Watch Station A.E.G.I.S.
Watch Station A.E.G.I.S. is a lifeless space station perched in low Earth orbit that was originally built to serve as an orbital forward operating base for the U.S. military. However, before it could become operational, the bombs fell and consumed the world in nuclear fire. For two centuries, the station has lain dormant and forgotten – little more than a fleeting glimmer in the night sky – until you rediscover it during your travels in the Wasteland.
Originally envisioned as a network of satellite platforms belonging to the American Enforcement & Guidance for International Security (A.E.G.I.S.) Initiative, only one such station was ever built before the outbreak of the Great War, and even it was never put to use. Raw materials were launched by rocket and the station was assembled in orbit by a robotic workforce, so no human has ever set foot aboard… until now.
This mod serves as a player home that also allows you to travel freely between the Capital Wasteland and Mojave Wasteland.
## Layout
Operations
### Operations
• Command Center – The control hub for the entire station, with lots of consoles, buttons, displays, and little flashing lights
Dormitory
### Dormitory
• Sleeping Quarters – Each room provides berthing for up to four crewmembers, with four rooms in total
• Lavatories – Each equipped with a shower, two sinks, and two toilets
Vista Suite
### Vista Suite
Located within the Dormitory pod, the Vista Suite is the private stateroom reserved for the station's Commanding Officer.
• Bed – Grants the Well Rested status effect
• Shower – Can be turned on and off and will scrub radiation over time when standing in the stream
• Sink – Provides purified water for drinking
• Toilet – Serves as a piece of furniture that you can sit on
• Dresser – Used for storing outfits and clothing
• Universal Fabricator – Serves as a combination workbench, reloading press, and campfire
• Jukebox – Plays jazzy/swing tracks and accepts Caps as payment
• Couch – Seats three
• Conference Area – Seats six
Recreation Deck
### Recreation Deck
• Commissary Terminal – Allows you to purchase food and drink
• Greenspace – Provides supplemental oxygen, as well as a hangout space
• Observation Area – Tables and chairs provide views of the Earth or deep space
• Seating for up to thirty-six people
Infirmary
### Infirmary
• Auto-Doc – Treats injuries and crippled limbs
• Chemistry Set – Allows you to craft chems
D.R.O.P. Bay
### D.R.O.P. Bay
• D.R.O.P. Control Terminal – Allows you to select an insertion point
• D.R.O.P. Pods – Six pods which allow you to deploy to the surface, provided you are wearing power armor
Armory
### Armory
• X-00/F Orbital Freefall Prototype Armor – A unique suit of power armor that must be repaired
• Lockers – Plenty of room for storage
• Workbenches – Allow you to craft items or repair equipment
Cargo Bay
### Cargo Bay
• Crates – DO NOT TAMPER WITH THE CRATES!
Docking Bay
### Docking Bay
• Delta IX Shuttle – Provides conventional transportation back to Earth (Be sure to activate the door, not the body of the craft)
Observation Ring
### Observation Ring
• Seating for up to six people, to enjoy the serene rotation of the Earth
## Features
The station's modules are interconnected by a “transit system” that lets you go straight to any one module from any of the others. One thing I disliked about other space station mods I found was that you typically had to actively walk through each cell in order to reach the one you want, which often made it frustrating and confusing to navigate. This method is a compromise between realism and convenience.
The station also features H.O.L.L.I., a holographic onboard assistant to talk to, who you can ask about almost any item or location aboard the station. She also acts as a vendor, and is the only source of the Interlink Booster chem.
## Access
### Shuttles
Shuttles are the most conventional method to gain access to the station, and can be found at Nellis AFB and Adams AFB. A shuttle flight takes six in-game hours.
• Nellis AFB: A hidden underground hangar at Nellis can be found by walking north-northeast along the runway from the Nellis Array until you come across a watchtower and an artillery battery. The entrance to the hangar is located between the watchtower and the artillery battery.
• Adams AFB: The shuttle at Adams is located in Hangar 3A, which can only be accessed once you have completed the quest Who Dares Wins from the Broken Steel DLC (i.e. boarded the Vertibird back to the Citadel) and then returning to Adams AFB.
The underground hangar at Nellis.
### Retrieval Beacons
Retrieval Beacons are transponders for the station's Burst-Encoded Automated Matter/Energy Relay (B.E.A.M.E.R.) which allow it to lock on to a target and teleport it up to the station. Teleportation is instantaneous, but only works to bring you onto the station from any exterior cell. There are two beacons to be found 1), one in the Capital Wasteland and another in the Mojave.
• Capital Wasteland: Around the vicinity of SatCom Array NW-07c, there is a crashed pod in a crater, inside which can be found one of the beacons. Look for the plume of black smoke.
• Mojave Wasteland: Between the Old Nuclear Test Site and the Crashed Vertibird, there is a crashed pod in a crater, inside which can be found one of the beacons. Look for the plume of black smoke.
### D.R.O.P. Pods
The Dynamic Re-entry of Orbital Personnel option simulates being dropped from orbit in a re-entry capsule to land at any point on Earth, like in Robert A. Heinlein's seminal novel Starship Troopers. Currently, you can choose between eight locations to deploy to: Camp McCarran, The Fort or The Strip in the Mojave, or The Citadel, Rivet City, Megaton, Paradise Falls or Evergreen Mills in the Capital Wasteland. A D.R.O.P. insertion takes ninety in-game minutes.
If you can't find the beacon, it may have clipped through the floor. Use the console command Player.AddItem ##AE0001 1 to spawn a new one.
|
2021-10-27 06:25:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.29410409927368164, "perplexity": 7504.252776682092}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588102.27/warc/CC-MAIN-20211027053727-20211027083727-00077.warc.gz"}
|
https://math.stackexchange.com/questions/2496250/proving-lower-bound-on-number-of-queries-to-oracle
|
# Proving lower bound on number of queries to oracle?
Suppose that you are given a polynomial $p(x)$ as a black box (i.e. some oracle, to which you feed $x$ and it returns $p(x)$). It is known that the coefficients of $p(x)$ are integers. How do you determine what $p(x)$ is in the quickest way possible?
Question : How to prove the lower bound that you need at least $d$ many queries if the input polynomial has degree $d$?
$d$ queries at $a_1$, $\ldots$, $a_d$ cannot determine a polynomial $P$ of degree $d$ because all the polynomials $P + k(x-a_1)\cdots(x-a_d)$ give the same output for all those queries. So you need at least $d+1$. Clearly, $d+1$ are also sufficient.
|
2021-04-10 14:32:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9212121963500977, "perplexity": 120.84049109708305}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038057142.4/warc/CC-MAIN-20210410134715-20210410164715-00418.warc.gz"}
|
https://iris.polito.it/handle/11583/2729938
|
We consider the Schrödinger equation with a subcritical focusing power nonlinearity on a noncompactmetricgraph,andprovethatforeveryfiniteedgethereexistsathresholdvalueof themass,beyondwhichthereexistsapositiveboundstateachievingitsmaximumonthatedge only. This bound state is characterized as a minimizer of the energy functional associated to the NLS equation, with an additional constraint (besides the mass prescription): this requires particular care in proving that the minimizer satisfies the Euler–Lagrange equation. As a consequence, for a sufficiently large mass every finite edge of the graph hosts at least one positive bound state that, owing to its minimality property, is orbitally stable.
Multiple positive bound states for the subcritical NLS equation on metric graphs / Adami, Riccardo; Serra, Enrico; Tilli, Paolo. - In: CALCULUS OF VARIATIONS AND PARTIAL DIFFERENTIAL EQUATIONS. - ISSN 0944-2669. - STAMPA. - 58:1(2019). [10.1007/s00526-018-1461-4]
### Multiple positive bound states for the subcritical NLS equation on metric graphs
#### Abstract
We consider the Schrödinger equation with a subcritical focusing power nonlinearity on a noncompactmetricgraph,andprovethatforeveryfiniteedgethereexistsathresholdvalueof themass,beyondwhichthereexistsapositiveboundstateachievingitsmaximumonthatedge only. This bound state is characterized as a minimizer of the energy functional associated to the NLS equation, with an additional constraint (besides the mass prescription): this requires particular care in proving that the minimizer satisfies the Euler–Lagrange equation. As a consequence, for a sufficiently large mass every finite edge of the graph hosts at least one positive bound state that, owing to its minimality property, is orbitally stable.
##### Scheda breve Scheda completa Scheda completa (DC)
File in questo prodotto:
File
AST_multiple_bound_states.pdf
non disponibili
Tipologia: 2a Post-print versione editoriale / Version of Record
Licenza: Non Pubblico - Accesso privato/ristretto
Dimensione 681.32 kB
Formato Adobe PDF
adami_serra_tilli_multiple_positive_bound_states.pdf
non disponibili
Tipologia: 1. Preprint / submitted version [pre- review]
Licenza: Non Pubblico - Accesso privato/ristretto
Dimensione 210.23 kB
Formato Adobe PDF
##### Pubblicazioni consigliate
Caricamento pubblicazioni consigliate
I documenti in IRIS sono protetti da copyright e tutti i diritti sono riservati, salvo diversa indicazione.
Utilizza questo identificativo per citare o creare un link a questo documento: `https://hdl.handle.net/11583/2729938`
|
2022-12-04 15:06:57
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8621140122413635, "perplexity": 5684.422233845161}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710974.36/warc/CC-MAIN-20221204140455-20221204170455-00388.warc.gz"}
|
https://physics.stackexchange.com/questions/196286/what-does-chemical-potential-mu-0-mean
|
# What does chemical potential $\mu = 0$ mean?
First off, just to be clear, the chemical potential being equal to zero is different from not having a chemical potential at all (e.g. a photon gas)?
Now: physically, what does having chemical potential $\mu=0$ mean for a gas?
A Bose-Einstein condensate exists below a critical temperature $T_c$, at which $\mu$ hits $0$. What's the connection between chemical potential and Bose-Einstein condensation?
• I just wanted to add one more question with a different perspective : What's the $\mu=0$ scenario for charged black holes in AdS/CFT? What does it actually mean? – Physics Moron Jul 28 '15 at 21:45
When the chemical potential is 0 the extra free energy needed to add or remove a particle to the system is 0(i.e $\mu=\frac{dA}{dN}=0$. So particles can leave and enter the system without changing the (free) energy.
In A BEC all particles have condensed to the ground state of the system. Particles entering or leaving the system will be added to the ground state or leave from the ground state. If your particles are non interacting the only contribution to energy is Kinetic which is monotonically increasing from 0 with respect to momentum. This means your ground state will have 0 energy and adding or removing a particle to the ground state adds or subtracts 0 energy so $\mu=0$.
• Thanks. And silly question here. What about the $mc^2$ contribution from the rest mass of the particle? – SuperCiocia Jul 28 '15 at 23:58
|
2019-06-26 22:02:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5701388716697693, "perplexity": 256.88356402306687}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000575.75/warc/CC-MAIN-20190626214837-20190627000837-00155.warc.gz"}
|
http://www.math.cmu.edu/PIRE/pub/publication.php?Publication=25
|
Science at the triple point between mathematics, mechanics and materials science
Publication 25
A simple and efficient scheme for phase field crystal simulation
Authors:
Matt Elsey
Courant Institute of Mathematical Sciences
New York University
Benedikt Wirth
Abstract:
We propose an unconditionally stable semi-implicit time discretization of the phase field crystal evolution. It is based on splitting the underlying energy into convex and concave parts and then performing $H^{-1}$ gradient descent steps implicitly for the former and explicitly for the latter. The splitting is effected in such a way that the resulting equations are linear in each time step and allow an extremely simple implementation and efficient solution. We provide the associated stability and error analysis as well as numerical experiments to validate the method's efficiency.
Get the paper in its entirety
StableandeffElsey.pdf
Back to Publications
|
2018-02-24 02:25:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20675989985466003, "perplexity": 590.5495753433755}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891815034.13/warc/CC-MAIN-20180224013638-20180224033638-00728.warc.gz"}
|
https://www.gradesaver.com/textbooks/science/physics/university-physics-with-modern-physics-14th-edition/chapter-7-potential-energy-and-energy-conservation-problems-exercises-page-231/7-37
|
## University Physics with Modern Physics (14th Edition)
Published by Pearson
# Chapter 7 - Potential Energy and Energy Conservation - Problems - Exercises - Page 231: 7.37
#### Answer
(a) The force of friction on the box is 637 N. Since the system is at rest, the force of friction on the bag of gravel is zero. (b) The speed of the bucket will be 2.99 m/s
#### Work Step by Step
(a) We can find the maximum possible force of static friction on the box when the bag of gravel is on the box. $F_f = mg~\mu_s = (130.0~kg)(9.80~m/s^2)(0.700)$ $F_f = 892~N$ The weight of the bucket is $(65.0~kg)(9.80~m/s^2)$, which is 637 N. Since the maximum possible force of static friction on the box is greater than the weight of the bucket, the system is at rest. The force of friction on the box is 637 N. Since the system is at rest, the force of friction on the bag of gravel is zero. (b) Let $m_1$ be the mass of the bucket. Let $m_2$ be the mass of the box. $K_2+U_2 = K_1+U_1+W_f$ $\frac{1}{2}(m_1+m_2)v^2 = 0 + m_1gh - m_2g~\mu_k~d$ $v^2 = \frac{2m_1gh - 2m_2g~\mu_k~d}{m_1+m_2}$ $v = \sqrt{\frac{2m_1gh - 2m_2g~\mu_k~d}{m_1+m_2}}$ $v = \sqrt{\frac{(2)(65.0~kg)(9.80~m/s^2)(2.00~m) - (2)(80.0~kg)(9.80~m/s^2)(0.400)(2.00~m)}{65.0~kg+80.0~kg}}$ $v = 2.99~m/s$ The speed of the bucket will be 2.99 m/s, We can use Newton's laws to check this answer. $ma = \sum F$ $a = \frac{m_1g - m_2g~\mu_k}{m_1+m_2}$ $a = \frac{(65.0~kg)(9.80~m/s^2) - (80.0~kg)(9.80~m/s^2)(0.400)}{145.0~kg}$ $a = 2.23~m/s^2$ We can use the acceleration to find the speed of the system. $v^2=v_0^2+2ay = 0+2ad$ $v = \sqrt{2ay} = \sqrt{(2)(2.23~m/s^2)(2.00~m)}$ $v = 2.99~m/s$
After you claim an answer you’ll have 24 hours to send in a draft. An editor will review the submission and either publish your submission or provide feedback.
|
2019-11-22 18:25:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.765955924987793, "perplexity": 175.06466843293933}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496671411.14/warc/CC-MAIN-20191122171140-20191122200140-00508.warc.gz"}
|
https://examspace.net/simple-interest-basics.html
|
HomeGeneral Mathsimple interest basics | simple interest amount formula
# simple interest basics | simple interest amount formula
In this article simple interest basics, you will learn the basics of simple interest and formulas to calculate and solve problems related to simple questions and answers.
Suppose a person x borrowed some amount of money from someone. so what we see in our daily life is that when this person X will return the borrowed amount to Lender Y, he has to return some extra amount with originally borrowed money. The extra amount that Person X Pay to Person Y is known as Interest.
You can consider Interest as rent on the money borrowed. Here in this very 1st example, Person X utilized the lender’s money for his personal work, and later he will pay the money or the item that he has borrowed from someone, and for the use of that money, he agreed to pay the extra amount.
Money is important in our life. in this capitalistic world, no one can deny the importance of money in our life. when you will grow older you will have to buy many things for that either you will pay money from your savings or you will take that money from financial institutions. if you will take that money from financial institutions you will have to pay interest to them hence it is very important to know about Interests and Formulae to calculate it.
Content Index
## Types of Interest
Friends Interests are of two categories. in which simple interest is more important as for most financial dealings we will have to pay the interest as per simple interest only… while if you will understand the power of compounding or compound interest it will change your life
1. Simple interest and
2. Compound interest.
1st we will discuss simple interest we will understand the basic concept of simple interest and the terms related to it and the formula to solve related questions.
Principal:-
The principal is the amount of money that someone borrows from a Lender or Financial institution.
Amount:-
The amount is the sum of Principal and accumulated Interest over time.
Rate of Interest:-
The rate of interest is the value of money that someone pays for per hundred of the borrowed amount. The rate can be of two types 1). rate per month and2) rate per annum.
Local lenders charge per reply rate per month institutional lenders demands for your reply rate per annum
## Formula related to Interest
\\ Simple Interest= \frac{principal\times time\times rate} {100}
\\ Principal= \frac{simple\ interest\times100} {rates\times time }
\\ Rate= \frac{interest\times 100} {principal \times time }
\\ time= \frac{interest \times 100}{ principal \times rate }
\\ principal= \frac{interest \times 100}{time \times rate }
\\ Amount=Principal+ Interest
Till now you have understood the simple interest basics and formula to solve various questions related to simple interest. Friend interest is the extra amount that someone pays with the principal amount to the lender..
Now let’s talk about the other type of interest i.e Compound Interest
## Compound Interest
Compound Interest is the eighth wonder of this world. In compound Interest, Interest is charged not only for the Principal amount but Interest is charged on the interest too. so in compound interest one either get or pays Interest on Interest.
if the borrower doesn’t pay the money on time then after a given time Interest is added to the principal.
The principal keeps on increasing at the end of every year by an amount to the interest for that specific year.
It may be good from the lender’s point of view but it is very dangerous for borrowers.
Below I am providing the list of formulas that will be helpful for the students to solve compound interest-related questions.
## Formula for Compound Interest
\color{blue} {
\\ compound interest= P\left(1+ \frac{r}{n}\right) ^n - P
}
\\
\\
\\ P= Principal Amount \\ r= Rate \ of\ Interest \\ n= Time Period
## FAQ on Simple Interest Basics
### What is Simple Interest?
Simple Interest is the extra amount of money that someone pays for the borrowed money.
### What is the formula of Simple Interest
Simple Interest=( principal×time×rate)/100
### What are the 3 factors used in calculating simple interest?
The factors used in the calculation of simple interest are Principal amount value, Time, and Rate of Interest.
### Where is simple interest used?
Almost for all short-term loans, personal loans, car loans simple interest is charged.
### Banks use Simple Interest or Compound Interest
for timely payments, banks used simple interest but if someone pays beyond the due date then the scenario may be different.
|
2022-12-05 02:21:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7982089519500732, "perplexity": 2141.1489905824596}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711001.28/warc/CC-MAIN-20221205000525-20221205030525-00451.warc.gz"}
|
https://gist.github.com/Reinmar/c23f70232ce84cb8011f
|
{{ message }}
Instantly share code, notes, and snippets.
Last active Aug 29, 2015
## General Rules
1. License of resources (libraries, tools, fonts, images, everything) which are installed via npm or any other package manager (what means that they are not part of our repository) and are not included in any form in any release package do not concern us. It means that as long as a resource (or any its parts) is not included in our repository or any release package, we can freely use it (of course, as long as we have rights to use it). Example: licenses of your operating system or a LESS compiler are not a problem even if these tools were licensed under GPL. Example 2: You cannot use Photoshop if you haven't bought a license for it.
2. Before using any resource that does not fall into the previous category, consult with WW, FCK or PK if its license does allow this.
## Project specific rules
1. CKEditor
2. Licenses of all resources included in the ckeditor-dev repository or any release package must be specified in "Sources of Intellectual Property Included in CKEditor" section of the LICENSE.md file and full text of their licenses must be included as appendix in this file. Note: This section is divided into source code and release code sections.
3. If a 3rd party resource is concatenated with our resource, then the result file must include our license pointing to the LICENSE.md (which mentions that a file XYZ contains a resource Foo licensed under Bar). Other licenses may be removed (so the result file contains only one license comment).
### fredck commented May 5, 2015
We've been discussing this topic in the Drupal side as we had recently issues with the MathJax license. The conclusion seems to be that, if a third-party software is assembled with our software, a proper license must be available. No matter if that software is included in our repository or simply linked to it through npm, bower, script injection or whatever. For example, if we create a builder application that uses other npm applications to do its job, the licenses of both apps must be compatible. The same for icons or things that are to be included in a build or even participate on the execution of any of our software. Tools instead, like compilers and linters are not a problem ofc. For example, they can be referenced in package.json in our projects, for example, because we'll be running then standalone.
to join this conversation on GitHub. Already have an account? Sign in to comment
|
2021-06-23 13:08:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19411341845989227, "perplexity": 2251.224091818928}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488538041.86/warc/CC-MAIN-20210623103524-20210623133524-00130.warc.gz"}
|
http://software.imdea.org/events/fall_software_seminar_2009.html
|
## Software Seminar Series (S3) - Fall 2009
### Tuesday, December 15, 2009
Álvaro García Pérez, PhD Student, IMDEA Software Institute
### The beta-cube. A space of evaluation strategies for the untyped lambda-calculus
#### Abstract:
Different evaluation orders for the untyped(1) lambda-calculus exists (call-by-value, call-by-name, normal order, applicative order...), reflecting the nuances in the evaluation of a system which serves as the foundation of functional programming languages.
In this talk, I will introduce a generic evaluator (written in Haskell) which can be instantiated to any evaluator realising a particular evaluation order. For this purpose, I will recall some notions of the untyped lambda-calculus, give an algebraic data type representing lambda-terms, present the big-step semantics of the evaluation orders using natural deduction rules, implement them (using CPS following Reynolds' advice and showing alternative solutions in Haskell), show how monads can help to write neater code, present a way to hybridate existing evaluation orders to produce new ones, comment about an absortion theorem regarding hybridation and describe something I call the "beta-cube".
(1) We prefer to use "untyped" rather than "pure" so as to avoid recent controversy regarding the latter word.
### Tuesday, December 1, 2009
Julian Samborski Forlese, PhD Student, IMDEA Software Institute
### Dependent Types for Low-Level Programming
#### Abstract:
Types provide a convenient and accesible mechanism for specifying program invariants. Dependent types extends simple types with the ability to express invariants relating multiple state elements. While such dependencies likely exists in all programs, they play a fundamental role in low-level programming. These dependencies are essential even to prove simple properties like memory safety. In this talk I will present an overview of a type system that combines dependent types and mutation for variables and for heap-allocated structures, and a technique for automatically inferring dependent types.
Note: The talk will be based on the paper ”Dependent Types for Low-Level Programming” by Jeremy Condit, Mathew Harren, Zachary Anderson, David Gay, and George Necula.
### Tuesday, November 24, 2009
Angel Herranz, Assistant Professor, BABEL, UPM
### Validation of UML Models with Uppaal. A Case Study in ERTMS
#### Abstract:
There is a growing interest in the use of UML for the modeling of embedded and hybrid systems. A key factor for the adoption of model driven methods is the possibility to promptly validate designs and reveal inconsistent and incomplete requirements. But most of the interesting properties to check are beyond the capabilities of common tools. We show a transformation of UML models into Uppaal specifications that can then be used to analyse complex behavioural properties. An industrial case study on the modelling of ERTMS is used to show the applicability of our approach in the detection of property violations.
### Tuesday, November 17, 2009
Edison Mera, PhD Student, The CLIP Laboratory, UPM
### Integrating Software Testing and Run-Time Checking in an Assertion Verification Framework
#### Abstract:
We present a framework that unifies unit testing and run-time verification (as well as static verification and static debugging). A key contribution of our overall approach is that we preserve the use of a unified assertion language for all of these tasks. We first describe a method for compiling run-time checks for (parts of) assertions which cannot be verified at compile-time via program transformation.
This transformation allows checking preconditions and postconditions, including conditional postconditions, properties at arbitrary program points, and certain computational properties.
The implemented transformation includes several optimizations to reduce run-time overhead. Most importantly, we propose a minimal addition to the assertion language which allows defining unit tests to be run in order to detect possible violations of the (partial) specifications expressed by the assertions.
We have implemented the framework within the Ciao/CiaoPP system and effectively applied it to the verification of ISO Prolog compliance and to the detection of different types of bugs in the Ciao system source code. We have performed some experiments to assess different trade-offs among program size, running time, or levels of verbosity of the messages shown to the user.
The talk will be divided into two parts. The first one will be devoted to the description of the unified framework, and a demonstration of the system will be performed in the second part.
### Tuesday, November 10, 2009
Juan Manuel Crespo, PhD Student, IMDEA Software Institute
### Type theory and type conversion
#### Abstract:
The definition of type equivalence plays a crucial role in any statically typed language. In dependent type theories, there is no syntactic distinction between terms and types, and this notion of equivalence, generally referred to as conversion, is fundamental for typechecking.
Dependent type theories are classified in two categories according to the way in which conversion is handled:
extensional type theories ( as implemented in NuPRL ) identify conversion with propositional equality (expressed as a type and used for reasoning) resulting in powerful systems with undecidable typechecking;
intensional type theories ( as implemented in Coq, Agda and Epigram) use a decidable notion of conversion, typically $\beta$-equivalence, and rely on strong normalisation to guarantee decidable typechecking. In the talk we will focus on this class.
The talk will consist of two parts: first I will show that in some cases the notion of conversion present in intesional type theories can be too weak. I will do so through a set of simple practical examples developed in Agda2. In the second part I will informally review some research aimed at extending conversion without compromising decidability of typechecking.
### Tuesday, November 3, 2009
Santiago Romero, Research Intern, IMDEA Software Institute
### Runtime monitoring of asynchronous systems with calls and returns (plus statistical information)
#### Abstract:
Interest on runtime verification has grown in recent years and a lot of work has been focused on finding suitable formalisms to express the properties to be monitored. Many interesting properties such as correctness of procedures with respect to pre and post conditions and properties on the execution stack cannot be written in plain LTL, and a few alternatives have been presented to achieve it.
In this talk we present LOLA, a simple and expressive specification language that allows statistical information to be gathered along the trace. We also briefly describe the algorithm for online monitoring of asynchronous systems and how we extend this language to specify context and stack sensitive properties. Finally, we show how this extension of LOLA allows interesting properties to be precisely expressed by means of examples.
This is joint work with César Sánchez.
### Tuesday, October 27, 2009
Alan Mycroft, Invited researcher, IMDEA Software Institute
### Program Testing via Hoare-style Specifications
#### Abstract:
Program Validation (testing) and Verification (formal proof) are too often seen as disjoint subject areas. We observe that a typical unit test' (e.g. in JUnit) first creates some data structure, then invokes the procedure under test, then checks an assertion --in essence a Hoare Triple {P}C{Q} but opaquely coded. The Hoare-triple view of tests greatly simplifies the coding of tests, expecially for procedures which mutate data structures or raise exceptions. Such tests can be implemented using transactional memory to provide access to both x and old(x), and for more high-level constructs such as modifiesonly(x.f,y.g). We show how such tests may be compiled into standard Java. Moreover, this view encourages generalised tests in which preconditions can use logical forms such as forall' and implication. Transactional techniques allow such generalised tests to be used during execution on real data as a form of a test mode. This is joint work with Kathryn Gray, extending FASE'2009 work.
### Tuesday, October 13, 2009
Federico Olmedo, PhD Student, IMDEA Software Institute
### Provable security of cryptographic schemes
#### Abstract:
For a long time, the arguments that members of the cryptographic community used to exhibit in favor of the security of cryptosystems were deficient and weak (eg empiric validations, wrong proofs). On the contrary, provable security aims to provide the users with more rigorous arguments in favor of cryptographic schemes' security.
The concept of provable security was introduced by Goldwasser and Micali in their seminar paper Probabilistic Encryption in 1984. It heavily relies on the "computational model". In this talk we will present the computational model of security and the keys ideas underlaying provable security. This encompasses describing the sort of attackers it considers, how "security" is broadly defined and which proof techniques are used. Regarding proof methodologies, we will specially focus on the "game-playing" technique and (if not running out of time) we will present a proof of ElGamal IN-CPA security using this framework.
### Tuesday, October 6, 2009
Pedro R. D'Argenio, Professor, FaMAF, Universidad Nacional de Córdoba - CONICET, Argentina
### Partial Order Reduction for Probabilistic Systems: A Revision for Distributed Schedulers
#### Abstract:
The technique of partial order reduction (POR) for probabilistic model checking prunes the state space of the model so that a maximizing scheduler and a minimizing one persist in the reduced system. This technique extends Peled’s original restrictions with a new one specially tailored to deal with probabilities.
It has been argued that not all schedulers provide appropriate resolutions of nondeterminism and they yield overly safe answers on systems of distributed nature or that partially hide information. In this setting, maximum and minimum probabilities are obtained considering only the subset of so-called distributed or partial information schedulers.
In this article we revise the technique of partial order reduction (POR) for LTL properties applied to probabilistic model checking. Our reduction ensures that distributed schedulers are preserved. We focus on two classes of distributed schedulers and show that Peled’s restrictions are valid whenever schedulers use only local information. We show experimental results in which the elimination of the extra restriction leads to significant improvements.
### Tuesday, September 29, 2009
Juan Cespedes, System Administrator and Developer, IMDEA Software Institute
### Foundations of Dynamic Tracing
#### Abstract:
In this talk I will present the first principles of dynamic tracing, in particular tracing of systems programs. Tracing is used for building debuggers, monitors and runtime-verification tools.
I will start by introducing the support that modern processors and operatings offer for tracing, focusing on the facilities exposed by the Linux kernel. Then, I will describe how a dynamic library tracer can be built using this infrastructure. The paramount example is the tool `ltrace' which I implemented a while ago.
Finally, I will describe the changes required to extend a dynamic library tracer into a runtime-verification infrastructure that handles concurrency (both light and heavy threads, both simulated concurrency and real-multicore multi-processors). Bagheera is a toolkit being developed at IMDEA-Software that implements these extensions and is reprogammable.
Bagheera is joint work with Santiago Romero and Cesar Sanchez.
Software Seminar Series (S3) - Spring 2009
|
2017-03-28 12:05:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3864884078502655, "perplexity": 2249.3834029500385}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189734.17/warc/CC-MAIN-20170322212949-00082-ip-10-233-31-227.ec2.internal.warc.gz"}
|
http://nd.ics.org.ru/archive_nd/v12n1/
|
0
2013
Impact Factor
# Vol. 12, No. 1, 2016
Matyushkin I. V. Abstract The properties of an $e^{iz}$ map are studied. It is proved that the map has one stable and an infinite number of unstable equilibrium positions. There are an infinite number of repellent twoperiodic cycles. The nonexistence of wandering points is heuristically shown by using MATLAB. The definition of helicity points is given. As for other hyperbolic maps, Cantor bouquets are visualized for the Julia and Mandelbrot sets. Keywords: holomorphic dynamics, fractal, Cantor bouquet, hyperbolic map Citation: Matyushkin I. V., On some properties of an ${\rm exp}(iz)$ map , Rus. J. Nonlin. Dyn., 2016, Vol. 12, No. 1, pp. 3-15 DOI:10.20537/nd1601001
Morozov Y. Abstract We consider a class of symmetric planar Filippov systems. We find the interval of variation of the bifurcation parameter for which there is an unstable limit cycle. There exist stationary points into the domain, which has this cycle as a boundary. The type of points depends on the value of the bifurcation parameter. There is a redistribution of the area, bounded by this cycle, between the attraction domains of stationary points. The results of numerical simulations are presented for the most interesting values of the bifurcation parameter. Keywords: limit cycle, planar system with a discontinuous right-hand side, global bifurcation Citation: Morozov Y., The limit cycle as a result of global bifurcation in a class of symmetric systems with discontinuous right-hand side, Rus. J. Nonlin. Dyn., 2016, Vol. 12, No. 1, pp. 17-30 DOI:10.20537/nd1601002
Kostromina O. S. Abstract Small time-periodic perturbations of an asymmetric Duffing – Van der Pol equation with a homoclinic “figure-eight” of a saddle are considered. Using the Melnikov analytical method and numerical simulations, basic bifurcations associated with the presence of a non-rough homoclinic curve in this equation are studied. In the main parameter plane the bifurcation diagram for the Poincaré map is constructed. Depending on the parameters, the boundaries of attraction basins of stable fixed (periodic) points of the direct (inverse) Poincaré map are investigated. It is ascertained that the transition moment of the fractal dimension of attraction basin boundaries of attractors through the unit may be preceded by the moment of occurrence of the first homoclinic tangency of the invariant curves of the saddle fixed point. Keywords: bifurcations, homoclinic Poincaré structures, attraction basins, fractal dimension, sensitive dependence on initial conditions Citation: Kostromina O. S., On the investigation of the bifurcation and chaotic phenomena in the system with a homoclinic “figure-eight”, Rus. J. Nonlin. Dyn., 2016, Vol. 12, No. 1, pp. 31-52 DOI:10.20537/nd1601003
Jalnine A. Y. Abstract In the present paper we consider a family of coupled self-oscillatory systems presented by pairs of coupled van der Pol generators and FitzHugh–Nagumo neural models, with the parameters being periodically modulated in anti-phase, so that the subsystems undergo alternate excitation with a successive transmission of the phase of oscillations from one subsystem to another. It is shown that, due to the choice of the parameter modulation and coupling methods, one can observe a whole spectrum of robust chaotic dynamical regimes, taking the form ranging from quasiharmonic ones (with a chaotically floating phase) to the well-defined neural oscillations, which represent a sequence of amplitude bursts, in which the phase dynamics of oscillatory spikes is described by a chaotic mapping of Bernoulli type. It is also shown that 4D maps arising in a stroboscopic Poincaré section of the model flow systems universally possess a hyperbolic strange attractor of the Smale–Williams type. The results are confirmed by analysis of phase portraits and time series, by numerical calculation of Lyapunov exponents and their parameter dependencies, as well as by direct computation of the distributions of angles between stable and unstable tangent subspaces of chaotic trajectories. Keywords: chaos, hyperbolicity, Smale–Williams attractor, neurons, FitzHugh–Nagumo model Citation: Jalnine A. Y., From quasiharmonic oscillations to neural spikes and bursts: a variety of hyperbolic chaotic regimes based on Smale – Williams attractor, Rus. J. Nonlin. Dyn., 2016, Vol. 12, No. 1, pp. 53-73 DOI:10.20537/nd1601004
Markeev A. P. Abstract We study the inertial motion of a material point in a planar domain bounded by two coaxial parabolas. Inside the domain the point moves along a straight line, the collisions with the boundary curves are assumed to be perfectly elastic. There is a two-link periodic trajectory, for which the point alternately collides with the boundary parabolas at their vertices, and in the intervals between collisions it moves along the common axis of the parabolas. We study the nonlinear problem of stability of the two-link trajectory of the point. Keywords: map, canonical transformations, Hamilton system, stability Citation: Markeev A. P., On the stability of the two-link trajectory of the parabolic Birkhoff billiards, Rus. J. Nonlin. Dyn., 2016, Vol. 12, No. 1, pp. 75-90 DOI:10.20537/nd1601005
Munitsyn A. I., Munitsyna M. A. Abstract An analytical solution of the problem of forced oscillation of the solid parallelepiped on a horizontal base is presented. It is assumed that the slippage between the body and the base is absent, and the base moves harmonically in a horizontal direction. It is also assumed that the height of the box is much larger than the width. The dissipation of impact is taken into account in the framework of Newton’s hypothesis. The forced oscillation modes of parallelepiped corresponding to the main and two subharmonic resonances are found by using the averaging method. The results are shown in the form of amplitude-frequency characteristics. Keywords: supported plane, nonlinear oscillations, averaging method Citation: Munitsyn A. I., Munitsyna M. A., Oscillations of a solid parallelepiped on a supported base, Rus. J. Nonlin. Dyn., 2016, Vol. 12, No. 1, pp. 91-98 DOI:10.20537/nd1601006
Tenenev V. A., Vetchanin E. V., Ilaletdinov L. F. Abstract This paper is concerned with the process of the free fall of a three-bladed screw in a fluid. The investigation is performed within the framework of theories of an ideal fluid and a viscous fluid. For the case of an ideal fluid the stability of uniformly accelerated rotations (the Steklov solutions) is studied. A phenomenological model of viscous forces and torques is derived for investigation of the motion in a viscous fluid. A chart of Lyapunov exponents and bifucation diagrams are computed. It is shown that, depending on the system parameters, quasiperiodic and chaotic regimes of motion are possible. Transition to chaos occurs through cascade of period-doubling bifurcations. Keywords: ideal fluid, viscous fluid, motion of a rigid body, dynamical system, stability of motion, bifurcations, chart of Lyapunov exponents Citation: Tenenev V. A., Vetchanin E. V., Ilaletdinov L. F., Chaotic dynamics in the problem of the fall of a screw-shaped body in a fluid, Rus. J. Nonlin. Dyn., 2016, Vol. 12, No. 1, pp. 99-120 DOI:10.20537/nd1601007
Kuznetsov S. P. Abstract Dynamical equations are formulated and a numerical study is provided for selfoscillatory model systems based on the triple linkage hinge mechanism of Thurston–Weeks–Hunt–MacKay. We consider systems with a holonomic mechanical constraint of three rotators as well as systems, where three rotators interact by potential forces. We present and discuss some quantitative characteristics of the chaotic regimes (Lyapunov exponents, power spectrum). Chaotic dynamics of the models we consider are associated with hyperbolic attractors, at least, at relatively small supercriticality of the self-oscillating modes; that follows from numerical analysis of the distribution for angles of intersection of stable and unstable manifolds of phase trajectories on the attractors. In systems based on rotators with interacting potential the hyperbolicity is violated starting from a certain level of excitation. Keywords: dynamical system, chaos, hyperbolic attractor, Anosov dynamics, rotator, Lyapunov exponent, self-oscillator Citation: Kuznetsov S. P., Hyperbolic chaos in self-oscillating systems based on mechanical triple linkage: Testing absence of tangencies of stable and unstable manifolds for phase trajectories, Rus. J. Nonlin. Dyn., 2016, Vol. 12, No. 1, pp. 121-143 DOI:10.20537/nd1601008
Borisov A. V., Kilin A. A., Mamaev I. S. Abstract In this paper, we develop the results obtained by J.Hadamard and G.Hamel concerning the possibility of substituting nonholonomic constraints into the Lagrangian of the system without changing the form of the equations of motion. We formulate the conditions for correctness of such a substitution for a particular case of nonholonomic systems in the simplest and universal form. These conditions are presented in terms of both generalized velocities and quasi-velocities. We also discuss the derivation and reduction of the equations of motion of an arbitrary wheeled vehicle. In particular, we prove the equivalence (up to additional quadratures) of problems of an arbitrary wheeled vehicle and an analogous vehicle whose wheels have been replaced with skates. As examples, we consider the problems of a one-wheeled vehicle and a wheeled vehicle with two rotating wheel pairs. Keywords: nonholonomic constraint, wheeled vehicle, reduction, equations of motion Citation: Borisov A. V., Kilin A. A., Mamaev I. S., On the Hadamard–Hamel problem and the dynamics of wheeled vehicles, Rus. J. Nonlin. Dyn., 2016, Vol. 12, No. 1, pp. 145-163 DOI:10.20537/nd1601009
Abstract Citation: VI International Conference “Geometry, Dynamics, Integrable Systems – GDIS 2016”, Rus. J. Nonlin. Dyn., 2016, Vol. 12, No. 1, pp. 165-166
Back to the list
|
2019-08-22 02:25:17
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7202253341674805, "perplexity": 727.9867979236541}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027316718.64/warc/CC-MAIN-20190822022401-20190822044401-00091.warc.gz"}
|
https://testbook.com/question-answer/find-the-emf-of-the-battery-shown-in-the-figure--606da5484c52baa0e72d3e4c
|
# Find the EMF of the battery shown in the figure. The voltage drop cross the 8 Ω resistor is 20 V
This question was previously asked in
UPPCL JE EC Official Paper (Held on 25 March 2021: Shift 1)
View all UPPCL JE Papers >
1. 43.5 V
2. 67.3 V
3. 52 V
4. 24 V
Option 2 : 67.3 V
Free
CT 1: Network Theory 1
11095
10 Questions 10 Marks 10 Mins
## Detailed Solution
Concept:-
KVL: Kirchhoff’s Voltage Law (KVL) states that the sum of all the voltages around a loop is equal to zero.
KCL: Kirchhoff’s Current Law (KCL) states that the algebraic sum of all the current entering or exiting a node is always zero.
Current division: The distribution of total current in between the parallel branches of a divider circuit is known as current division.
Equation:
Current $${{\rm{I}}_1} = \frac{{{{\rm{R}}_2}}}{{{{\rm{R}}_1} + {{\rm{R}}_2}}}{{\rm{I}}_{\rm{T}}}$$ ---(1)
Where IT is the total current
I1 and I2 are the currents in branch 1 and branch 2 respectively.
Calculations:-
Given,
Voltage drop across 8 Ω resistor = 20 V
From ohm’s law equation
V = IR
$$\;{\rm{I}} = \frac{{\rm{V}}}{{\rm{R}}}$$
So, $${\rm{I}} = \frac{{20}}{8}{\rm{A}}$$ = 2.5 A ----(2)
Using current division equation from 1
$${{\rm{I}}_1} = \frac{{15 + 13}}{{15 + 13 + 11}}2.5{\rm{\;A}}$$
I1 = 1.794 A
Now applying KVL in loop 1,
-E + 20 + 11 × I + 11 × I1 = 0
⇒ E = 20 + 11 × 2.5 + 11 × 1.794 (putting values of I and I1)
⇒ E = 67.3 V
So the EMF of the battery shown is 67.3 Volts.
|
2021-09-16 10:04:42
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.44603580236434937, "perplexity": 2221.7509076164247}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780053493.41/warc/CC-MAIN-20210916094919-20210916124919-00253.warc.gz"}
|
https://plosjournal.deepdyve.com/lp/springer-journals/erratum-to-hyperbolic-distortion-boundary-behaviour-and-finite-X6jn737CEj
|
# Erratum to: Hyperbolic Distortion, Boundary Behaviour and Finite Blaschke Products
Erratum to: Hyperbolic Distortion, Boundary Behaviour and Finite Blaschke Products Comput. Methods Funct. Theory (2015) 15:289–290 DOI 10.1007/s40315-015-0112-4 ERRATUM Erratum to: Hyperbolic Distortion, Boundary Behaviour and Finite Blaschke Products Nina Zorboska Published online: 25 March 2015 © Springer-Verlag Berlin Heidelberg 2015 Erratum to: Comput. Methods Funct. Theory DOI 10.1007/s40315-014-0099-2 This errata note is a correction to Proposition 2.2 in the original version of the paper. We provide here a corrected statement of the result with its corrected proof. The proposition requires an additional assumption of univalence, since the proof uses a result from [1, p. 71] which requires it. Note that the result in [1, p. 71] does require the univalence, even though it has been stated there without this assumption. Proposition 2.2 Let φ be a univalent self-map of D, and let ζ ∈ ∂ D be such that lim τ (z) = 0. Then there exists , an open subarc of ∂ D containing the point ζ , z→ζ φ such that the only possible subsets of mapped by φ into ∂ D are sets of measure zero. Proof Since lim τ (z) = 0, for any 0 < < 1, ∃δ> 0 such that τ (z)< z→ζ φ φ whenever z ∈ B(ζ, δ) ∩ http://www.deepdyve.com/assets/images/DeepDyve-Logo-lg.png Computational Methods and Function Theory Springer Journals
# Erratum to: Hyperbolic Distortion, Boundary Behaviour and Finite Blaschke Products
, Volume 15 (2) – Mar 25, 2015
2 pages
/lp/springer-journals/erratum-to-hyperbolic-distortion-boundary-behaviour-and-finite-X6jn737CEj
Publisher
Springer Journals
Subject
Mathematics; Analysis; Computational Mathematics and Numerical Analysis; Functions of a Complex Variable
ISSN
1617-9447
eISSN
2195-3724
DOI
10.1007/s40315-015-0112-4
Publisher site
See Article on Publisher Site
### Abstract
Comput. Methods Funct. Theory (2015) 15:289–290 DOI 10.1007/s40315-015-0112-4 ERRATUM Erratum to: Hyperbolic Distortion, Boundary Behaviour and Finite Blaschke Products Nina Zorboska Published online: 25 March 2015 © Springer-Verlag Berlin Heidelberg 2015 Erratum to: Comput. Methods Funct. Theory DOI 10.1007/s40315-014-0099-2 This errata note is a correction to Proposition 2.2 in the original version of the paper. We provide here a corrected statement of the result with its corrected proof. The proposition requires an additional assumption of univalence, since the proof uses a result from [1, p. 71] which requires it. Note that the result in [1, p. 71] does require the univalence, even though it has been stated there without this assumption. Proposition 2.2 Let φ be a univalent self-map of D, and let ζ ∈ ∂ D be such that lim τ (z) = 0. Then there exists , an open subarc of ∂ D containing the point ζ , z→ζ φ such that the only possible subsets of mapped by φ into ∂ D are sets of measure zero. Proof Since lim τ (z) = 0, for any 0 < < 1, ∃δ> 0 such that τ (z)< z→ζ φ φ whenever z ∈ B(ζ, δ) ∩
### Journal
Computational Methods and Function TheorySpringer Journals
Published: Mar 25, 2015
### References
Access the full text.
|
2022-11-26 19:42:32
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8656778335571289, "perplexity": 2336.391040882088}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446708046.99/warc/CC-MAIN-20221126180719-20221126210719-00179.warc.gz"}
|
https://itrf.ign.fr/en/solutions/itrf2005
|
## ITRF2005
### Description
$$\global\def\fig#1{\scriptsize\textcolor{E83E8C}{fig.#1}}$$
Unlike the previous versions of the ITRF, the ITRF2005 is constructed with input data under the form of time series of station positions and EOPs. The ITRF2005 input time-series solutions are provided in a weekly sampling by the IAG International Services of satellite techniques (IGS, ILRS and IDS) and in a daily (VLBI session-wise) basis by the IVS. Each per-technique time-series is already a combination, at a weekly basis, of the individual AC solutions of that technique, except for DORIS.
$\fig1$: ITRF2005 velocity field
### Input data
Space geodesy solutions. The used time series of solutions of space geodesy are summarized in the following table, indicating for each one, the time span and the type of constraints.
TC - AC Time span Type of constraints/solution Description
IVS 1980.0 - 2006.0 Normal equation Summary
ILRS 1992.9 - 2005.9 Loose / variance-covariance Summary
IGS 1996.0 - 2006.0 Minimal/Inner; variance-covariance Summary
IDS-IGN-JPL 1993.0 - 2006.0 Loose / variance-covariance Summary
IDS-LCA 1993.0 - 2005.8 Loose / variance-covariance Summary
#### IDS-LCA solution
L. Soudarin (CLS)
J.F. Crétaux (LEGOS-GRGS)
A series of weekly station coordinates and daily pole coordinates has been computed using 13 years (January 1993 to December 2005) of DORIS measurements collected by the instruments onboard SPOT2, SPOT3, SPOT4, SPOT5, TOPEX/Poseidon and ENVISAT. The tracking data are processed nominally on 3.5-day arc (half a week of the GPS calendar) for all the 6 satellites using the GINS/DYNAMO software of the GRGS. Individual normal equations are accumulated to form weekly combined matrices. The inversion of these matrices yields the weekly solutions for the geodetic parameters. This time series of site and pole coordinates is expressed in free network with loose constraints (10 meter and 500 mas respectively). The coordinate solutions are expressed at the median epoch of the week. Pole coordinates are estimated daily at 12:00. The gravity field model used in the orbit computation is GRIM5-C1.
### Computation strategy
$$\global\def\fig#1{\scriptsize\textcolor{E83E8C}{fig.#1}}$$
The strategy adopted for the ITRF2005 generation consists in the following steps, illustrated in $\fig1$:
$\fig1\\~$
• Apply minimum constraints equally to all loosely constrained solutions: this is the case of SLR and DORIS solutions
• Apply No-Net-Translation and No-Net-Rotation condition to IVS solutions provided under the form of Normal Equation
• Use as they are minimally constrained solutions: this is the case of IGS weekly solutions
• Form per-technique combinations (TRF + EOP), by rigorously staking the time series, solving for station positions, velocities, EOPs and 7 transformation parameters for each weekly (daily in case of VLBI) solution w.r.t the per-technique cumulative solution.
• Identify and reject/de-weight outliers and properly handle discontinuities using piece-wise approach. The discontinuity files used for each one of the 4 techniques are found here.
• Combine if necessary cumulative solutions of a given technique into a unique solution: this is the case of the two DORIS solutions.
• Combine the per-technique combinations adding local ties in co-location sites.
This final step yields the final ITRF2005 solution comprising station positions, velocities and EOPs. Note that EOPs start in the early eighties with VLBI, SLR and DORIS contribution starts on 1993 and GPS on 1999.5. The quality of the early VLBI EOPs is not as good as the combined EOPs starting on 1993.
### Frame Definition
#### Origin
The ITRF2005 origin is defined in such a way that there are null translation parameters at epoch 2000.0 and null translation rates between the ITRF2005 and the ILRS SLR time series.
#### Scale
The ITRF2005 scale is defined in such a way that there are null scale factor at epoch 2000.0 and null scale rate between the ITRF2005 and IVS VLBI time series.
#### Orientation
The ITRF2005 orientation is defined in such a way that there are null rotation parameters at epoch 2000.0 and null rotation rates between the ITRF2005 and ITRF2000. These two conditions are applied over a core network (see section transformation parameters between ITRF2005 and ITRF2000).
### ITRF2005 files
#### Station residuals files
The first step of ITRF2005 strategy is the independent computation of a long-term stacked TRF for each measurement technique under the assumption of linear motions. The associated EOPs are readjusted at the same time to make them consistent with their stacked TRFs. A break-wise approach is used to take into account discontinuities in the time series coming from any physical motion of the ground caused by an earthquake or a change in equipment.
The residuals of these computations, based on the standard relationship linking 2 frames, represent the non-linear part of point behaviors with respect to the secular frame. They can consequently be interpreted as point motions viewed by space geodesy.
##### Download zip files of time series residuals expressed in the local frame
File format description: yyyy.yyyyy res sig soln
• yyyy.yyyyy : epoch in decimal year.
• res : residual value in millimeter in North , East and Up component depending on the file extension (.DN)(.DE) and (*.DH) respectively.
• sigma: 1-sigma formal error.
• soln : solution number (see the discontinuity file).
#### SINEX and other files
The ITRF2005 files are available via FTP :
• host: itrf-ftp.ign.fr
• folder: pub/itrf/itrf2005
Notes: Users of ITRF2005 SINEX files should be aware that the standard deviations listed in the field STD_DEV of the +SOLUTION/ESTIMATE block are scaled by the Root Square of the Variance Factor (RSVF) of the ITRF2005 combination. The RSVF is equal to 0.690717136561903. The Variance-Covariance matrices of the ITRF2005 SINEX files are not scaled by the variance factor.
Full Solution: Station Positions, Velocities and EOP's
Full Solution: Station Positions & Velocities
VLBI Station positions, velocities and EOPs starting on 1980.0
VLBI Station positions and velocities
SLR Station positions, velocities and EOPs starting on 1993.0
SLR Station positions & velocities
GPS Station positions, velocities and EOPs starting on 1999:157
GPS Station positions & velocities
DORIS Station positions, velocities and EOPs starting on 1993.0
DORIS Station positions & velocities
Complete EOP list file: one line per day (MJD) listing all parameter of that day.
Complete EOP list file: one parameter per line.
Station position and velocity Residuals of the ITRF2005 combination of the 4 techniques
#### For SLR users of the ITRF2005
Because the ITRF2005 combination revealed a scale inconsistency between SLR and VLBI solutions of 1.0 ppb at epoch 2000.0 and a scale rate of 0.08 ppb/yr, and the fact that the ITRF2005 scale is defined by VLBI, it was decided to make available to SLR users an SLR solution extracted from the ITRF2005 and re-scaled back by the aforementioned scale and scale rate. This solution is available by FTP provided in two SINEX files:
Two corresponding tables listing station positions and velocities with their sigmas are also available:
### Transformation Parameters from ITRF2005 to ITRF2000
$$\global\def\fig#1{\scriptsize\textcolor{E83E8C}{fig.#1}}$$
14 transformation parameters from ITRF2005 to ITRF2000 have been estimated using 171 stations listed in the core network list and located at 131 sites shown on the map below ($\fig2$).
T1
mm
T2
mm
T3
mm
D
10-9
R1
mas
R2
mas
R3
mas
0.1 -0.8 -5.8 0.40 0.000 0.000 0.000
$\pm$ 0.3 0.3 0.3 0.05 0.012 0.012 0.012
Rates -0.2 0.1 -1.8 0.08 0.000 0.000 0.000
$\pm$ 0.3 0.3 0.3 0.05 0.012 0.012 0.012
$\fig1$: Transformation parameters at epoch 2000.0 and their rates from ITRF2005 to ITRF2000 (ITRF2000 minus ITRF2005)
$\fig2$: Sites used in the estimation of the transformation parameters between ITRF2005 and ITRF2000
#### Acknowledgement
Altamimi, Z., X. Collilieux, J. Legrand, B. Garayt, and C. Boucher (2007), ITRF2005: A new release of the International Terrestrial Reference Frame based on time series of station positions and Earth Orientation Parameters , J. Geophys. Res., 112, B09401.
|
2022-05-20 22:39:02
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5828696489334106, "perplexity": 6427.588614999642}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662534693.28/warc/CC-MAIN-20220520223029-20220521013029-00503.warc.gz"}
|
https://physics.stackexchange.com/questions/491748/in-special-relativity-is-mass-just-a-measure-of-all-other-energy-than-kinetic
|
In special relativity is mass just a measure of all other energy than kinetic?
The energy momentum equation in special relativity is: $$E^2=(pc)^2+(mc^2)^2.$$ and it holds for a moving but not accelerating object. One special case is the massless photon: $$E=pc.$$ And another one is a resting object: $$E=mc^2.$$ The first term in the energy momentum equation seems to be kinetic energy of the object as a whole? That should mean that the second term is all other forms of energy? This could include kinetic energy of the constituents of the object. So a full battery for instance has more energy than an empty battery and therefore has a larger mass? Likewise a hot kilogram prototype has more energy than a cold one and therefore has larger mass? This thermic energy is kinetic energy at constituent level but not for the object as a whole.
In classical mechanics mass is a property of an object that has to do with its inertia. We could define $$m=\frac{F}{a}$$. In classical mechanics an empty battery has the same mass as a full one and a cold kilogram prototype has the same mass as a hot one.
Does this not mean that the concept of mass has changed in special relativity and now is a measure of all energy except kinetic energy of an object?
If so $$E=mc^2$$ does not necessarily predict the atomic bomb? Instead of it saying that all objects have some intrinsic energy due to their mass it could say that an absolutely cold still object has no mass since it has no energy. Which sounds absurd.
So I am guessing that the old concept of mass from classical mechanics is an approximation to the new concept of mass in special relativity?
• Have a look hyperphysics.phy-astr.gsu.edu/hbase/Relativ/vec4.html . The third equation is not used in particle physics, because it is misleading, it is the inertial mass acquired when particles reach close to the velocity of light. What is used is the "invariant mass" , the "length" of the four vector of a particle , or a system of particles. Vector algebra is used. – anna v Jul 15 '19 at 14:12
• So far you have only considered an isolated particle. If you extend the consideration to system you’ll find that the mass of a system can include some of the kinetic energy of its constituents. – dmckee --- ex-moderator kitten Jul 15 '19 at 14:26
• In relation to @dmckee's point, when you say that a hot object has more mass than an otherwise identical cold object, you are actually pointing to the fact that some fraction of the mass of a system comes from some of the kinetic energy of its constituents. – Dvij D.C. Jul 15 '19 at 14:29
• Yes you are right. I have updated my question. The kinetic energy due to the momentum of the system as a whole is the firsts term. – Andy Jul 15 '19 at 14:33
• @ZachMcDargh We do not empirically see that the mass of an object does not depend on temperature. It very much does. – Dvij D.C. Jul 15 '19 at 14:40
Yes, in special relativity, the mass of a system is synonymous with the energy of the system in a frame where its momentum is zero. This, as you observe, would directly follow from the relation $$E^2=p^2+m^2$$. I will drop the factors of $$c$$ for convenience (or, in other words, I will use natural units and set $$c=1$$). Thus, in spirit, saying that the mass is a measure of all energy except the kinetic energy is correct with a couple of caveats:
• As you already notice in the updated version of your question, a many-particle system can have motion of its constituents which do not contribute to the overall momentum of the system but do contribute to the overall energy. And thus, they contribute to the mass of the system as a whole. Thus, the mass of the system as a whole does include contributions from kinetic energy but only from the kinetic energy that doesn't contribute to the overall momentum of the system.
• Due to the quadratic nature of the relation $$E^2=p^2+m^2$$, it is a bit problematic to directly identify $$p^2$$ with the kinetic energy of the system. Rather, the kinetic energy would be $$\sqrt{p^2+m^2}-m$$ which can be approximated to be $$\frac{p^2}{2m}$$ as usual for values of $$p$$ that are very small as compared to $$m$$. If you naively identify the $$p^2$$ as the kinetic energy, you wouldn't recover the correct non-relativistic limit.
Now, all your claims such as a hot cup of coffee having more mass than an otherwise identical but cold cup of coffee are true. However, this doesn't mean that the notion of mass doesn't anymore correspond to the property of inertia. Relativity doesn't change the notion of mass completely--it rather corrects it while unifying it with the notion of energy. In particular, it would be more difficult to accelerate a hot cup of coffee than a cold one if you can measure all the minuscule effects. So, the notion of mass in relativity is yet very much representative of the quality of inertia. The way to see this is to write down the expressions for momentum and energy in relativity. As you probably know, in relativity, $$p=\frac{mv}{\sqrt{1-v^2}}$$$$E=\frac{m}{\sqrt{1-v^2}}$$As you can see, it is the same $$m$$ that enters the formula for the energy also enters the formula for momentum. Thus, the same $$m$$ that represents both the measure of the energy in the rest frame (via entering the formula for energy) and the property of inertia (via entering the formula for momentum). This is the most basic conceptual unification represented in relativity and the genius of Einstein--the (rest) energy of a system is not independent of its inertia but two are the very same thing.
Now, finally, all of this doesn't mean that a cold object at rest shouldn't have any energy at all. It can very well have all sorts of reasons to have rest energy (and thus, mass). For example, even if all the constituents of a system are at rest and there is no interaction potential energy among them, the system as a whole would still have mass but it would simply be the sum of the masses of all its constituents. So, an object whose constituents are at rest simply means that its mass would not have contributions from the kinetic energy of its constituents. More importantly, there cannot be a massless system with no momentum (i.e., a system known to be rest must have mass). If something has no mass then having no momentum implies that its energy is also zero and this simply means that there is nothing.
• Thank you very much for explaining this! I take it what Einstein did could be summarized somewhat simplified this way: throw out mass conservation but fix Newtons equations by introducing a gamma correction factor + modifying the definition of mass slightly? That way he managed to unify classical mechanics and electromagnetics? – Andy Jul 15 '19 at 15:45
• @Andy Yes, you could say that. However, in relativity, mass conservation still applies--just not with the naive understanding of mass. For example, naively, when an electron and a proton combine into an atom, the mass of the atom is lower than the sum of the masses of the proton and the electron before they combined. But, the mass of the electron+proton system would be lower than the sum of their masses even before getting combined to form an atom due to the potential energy between them. [...] – Dvij D.C. Jul 15 '19 at 15:59
• [...] Naively, we ignore this correction and take the mass of the whole system to be the sum of the masses of the free proton and the free electron. But, when they combine to form an atom, we can directly measure the mass of an atom as a single particle and this correction (which we should have considered all along) becomes manifest. In mathematical language, you can see that mass is obviously conserved in relativity in this way: the momentum four-vector $p^{\mu}=(E,\vec{p})$ is conserved. Thus, its norm, $E^2-p^2$ is conserved. But, that is precisely the mass as $E^2=p^2+m^2$. – Dvij D.C. Jul 15 '19 at 15:59
As per our currently accepted model, the SM, there are elementary point particles, with no internal structure, no spatial extension.
There are two types:
1. massless, like the photon, gluon
2. particles with rest mass, like the electron and quarks
Now the rest mass of the electron is an intrinsic property, theoretical, calculated, nobody actually measured an electron at rest.
If you put a photon into a box with massless walls, the box gains rest mass. Why? because the photons in it are exerting pressure on the walls.
Now matter, is built up by elementary particles. Quarks, and massless gluons build up neutrons and protons. Only 1% of the mass of the neutrons and protons comes from the rest mass of the quarks. 99% is the gluons (massless) bonding energy, because they are confined into the neutrons and protons (like photons in a box).
Neutrons and protons build up nucleuses. Some of the mass of the nucleus comes from the bonding energy between the neutrons and protons (massless pions bonding energy, like photons in a box). The rest comes from the rest mass of the neutrons and protons. As per the correct comments, as you go to higher levels (nucleus, atoms, molecules), the binding energy actually becomes negative in the mass of the total QM system.
But the main point is, that these nuclei's rest mass is mostly from the binding energy of the gluons inside the nucleons.
Atoms are built up by nuclei and electrons. The rest mass of these adds some part of the atom, but the bonding energy is again there (actually it is negative).
But here too, most of the rest mass of the atoms are made up of the binding energy of the gluons inside the nucleons.
As you build atoms into molecules, the rest mass of the atoms is there, but the covalent bond adds to the rest mass of the molucule too (negative).
Here too, most of the rest mass of the molecule is made up of the binding energy of the gluons inside the nucleons.
As you go closer to the macro world, less comes from bonding energy (confined massless particles like the photon in the box) and more from the rest mass of the constituents. The closer to QM, the more comes from the confinement of massless particles.
In relativistic QM, gluons are traveling at speed c. Their bonding energies give a lot to the mass of neutrons and protons. You are right, SR is compatible with QM.
Mass at the lowest scale is mostly just these massless particles (moving at relativistic speeds) confinement (bonding). Quarks rest mass adds only very little. As per our current model, the SM, these quarks and electrons do have an intrinsic rest mass (that we explain by Higgs). Maybe one day when we will have figured out what these are made up of, we will know that the rest mass of these are just confined massless particles too (strings).
• A hydrogen atom has slightly less mass than a free electron and a proton. That means that there is energy to "harvest" by combining free electron and protons into hydrogen: fusion? If I understand you correctly there is a lot more energy to be "harvested" by combining quarks into electrons and protons? Alternatively it takes a crazy amount of energy to split into quarks: CERN collider? – Andy Jul 15 '19 at 15:19
• This answer feels like a bunch of disconnected facts thrown against the wall in hopes that some will stick, and it has some wrong (or at least misleading) statements in it. Pointedly, for nuclear, atomic, and molecular systems the binding energy is negative (reduces the system mass to less than the sum of the masses of the components). This makes lines like " Most of the mass of the nucleus comes from the bonding energy between the neutrons and protons" simply wrong. – dmckee --- ex-moderator kitten Jul 15 '19 at 18:10
• @Andy yes you are correct, most of the mass of the matter comes from binding energy. – Árpád Szendrei Jul 15 '19 at 18:14
• @Andy There's a lot of confusion in your statements :) Combining a free electron and a proton into hydrogen isn't fusion, it's just chemistry. Both quarks and electrons are different elementary particles. This still releases energy, mind you. But the same principle does apply to fusion as well - combine a free proton and a free neutron, and you get a deuteron and a lot of energy. Combine two deuterons, and you get helium-4, and huge loads of energy. It works for fission in reverse - split a Uranium atom, and you get two daughter nuclei, some free neutrons and lots of energy. – Luaan Jul 16 '19 at 8:13
• This doesn't even seem to try to answer the question. OP wants to know about mass in SR, but you're giving a response on, as others noted, a hodgepodge of SM factoids. – Kyle Kanos Jul 16 '19 at 12:48
|
2021-06-25 10:19:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 19, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5401822924613953, "perplexity": 274.1615872501374}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487630081.36/warc/CC-MAIN-20210625085140-20210625115140-00071.warc.gz"}
|
https://www.sparrho.com/item/non-perturbative-gravity-and-the-spin-of-the-lattice-graviton/938b13/
|
# Non-Perturbative Gravity and the Spin of the Lattice Graviton
Research paper by Herbert W. Hamber, Ruth M. Williams
Indexed on: 06 Jul '04Published on: 06 Jul '04Published in: High Energy Physics - Theory
#### Abstract
The lattice formulation of quantum gravity provides a natural framework in which non-perturbative properties of the ground state can be studied in detail. In this paper we investigate how the lattice results relate to the continuum semiclassical expansion about smooth manifolds. As an example we give an explicit form for the lattice ground state wave functional for semiclassical geometries. We then do a detailed comparison between the more recent predictions from the lattice regularized theory, and results obtained in the continuum for the non-trivial ultraviolet fixed point of quantum gravity found using weak field and non-perturbative methods. In particular we focus on the derivative of the beta function at the fixed point and the related universal critical exponent $\nu$ for gravitation. Based on recently available lattice and continuum results we assess the evidence for the presence of a massless spin two particle in the continuum limit of the strongly coupled lattice theory. Finally we compare the lattice prediction for the vacuum-polarization induced weak scale dependence of the gravitational coupling with recent calculations in the continuum, finding similar effects.
|
2021-02-26 01:44:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5032070279121399, "perplexity": 535.631180548566}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178355944.41/warc/CC-MAIN-20210226001221-20210226031221-00101.warc.gz"}
|
https://solvedlib.com/the-equivalent-inductance-of-the-circuit-shown,317842
|
# The equivalent inductance of the circuit shown below is 10H 10 H 30 H 12 H...
###### Question:
The equivalent inductance of the circuit shown below is
10H 10 H 30 H 12 H 18 H a. 40 H b. 26 H c. 20 HH d. 6H
#### Similar Solved Questions
##### Question 3: (45 marks] Suppose the price-setting equation is given by P= (1+mW where m is...
Question 3: (45 marks] Suppose the price-setting equation is given by P= (1+mW where m is the markup. The wage-setting equation is given by W = pe? where z are unemployment benefts and u is the unemployment rate. 1. Derive the real wage and unemployment consistent with equilibrium in the labor marke...
##### A weight W= 500 N is suspended from two identical cables AB and BC, each with a weight of q= 10 N...
A weight W= 500 N is suspended from two identical cables AB and BC, each with a weight of q= 10 N/m. If the maximum horizontal force allowed on the support cable is of 800 N, find: a) The shortest possible length of cable ABC. b) The height h on the previous situation. a= 20 m....
##### GloArt, Inc. entered into the following transactions during January a) Received $30,000 for January services. b)... GloArt, Inc. entered into the following transactions during January a) Received$30,000 for January services. b) Provided S10,000 in design services on account c) Collected $8,000 from customers on account. d) Collected$10,000 in advance on a prepaid contract. e) Provided $3,000 of design services ... 1 answer ##### Name 1. Give a correct name for each of the following compounds. Include stereochemistry when appropriate.... Name 1. Give a correct name for each of the following compounds. Include stereochemistry when appropriate. CH3 H -OH -Br H- Br CH2CH3 HO OH 2. Draw the following compounds १०ofion) lii... 5 answers ##### Maximize production, Q = Sx03y0.8, where X and are quantities of raw materials and the total quantity of these raw materials cannot be more than 110.Round your answers tO three decimal places_Maximum production QNumberwill be reached when XNumberand yNumber Maximize production, Q = Sx03y0.8, where X and are quantities of raw materials and the total quantity of these raw materials cannot be more than 110. Round your answers tO three decimal places_ Maximum production Q Number will be reached when X Number and y Number... 1 answer ##### Machinery was acquired at the cost of$500,000. It has a useful life of 10 years...
Machinery was acquired at the cost of $500,000. It has a useful life of 10 years and a salvage value of$50,000. Calculate the amount of depreciation of each of the first three years using the straight line depreciation method and declining balance method...
##### A British company is sending you measurements for a table to build in your factory
A British company is sending you measurements for a table to build in your factory. The table top must be 50 cm × 75 cm. Your machine calibrates in inches. Find the size of the table in inches. (1 cm = .39 in)...
##### Assume that you have a standard deck of 52 cards (jokers have been removed). (a) What...
Assume that you have a standard deck of 52 cards (jokers have been removed). (a) What is the information associated with the draw- ing of a single card from the deck? (b) What is the information associated with the draw- ing of a pair of cards, assuming that the first card drawn is replaced in the d...
##### Answer in simplest form -5/7(2/5) give me step by step
answer in simplest form-5/7(2/5)give me step by step...
##### (Please Print) Chemistry 112 Buffers, Entropy and Free Energy Event Please abide-by -the -Spelman-College Academic-Integrity Policy....
(Please Print) Chemistry 112 Buffers, Entropy and Free Energy Event Please abide-by -the -Spelman-College Academic-Integrity Policy. Show-work-on questions requiring calculations. More credit is likely to be earned when explanations are legible and concise. 1. Complete the following table on reactio...
##### If a = 30 cm, b = 20 cm, q =+4 nC, and Q = -8.5 nC in the figure; what is the potential difference VA ~VB ((in units of volts). ?ABSelect one:A. -51.46B. -54.00C.163.75D. 127.50E. 150.00
If a = 30 cm, b = 20 cm, q =+4 nC, and Q = -8.5 nC in the figure; what is the potential difference VA ~VB ((in units of volts). ? A B Select one: A. -51.46 B. -54.00 C.163.75 D. 127.50 E. 150.00...
##### 5. (30 points) A consumer group hoping to assess customer experiences with auto dealer sur- veys 160 people who recently bought new cars; 12 of them expressed dissatisfaction with the salesperson...
5. (30 points) A consumer group hoping to assess customer experiences with auto dealer sur- veys 160 people who recently bought new cars; 12 of them expressed dissatisfaction with the salesperson. We want to learn about the general opinion towards the salesperson. (a) (6 points) What is the meaning ...
##### WileyPLUSI MyWileyPLUS I Hele I Contect Us I Los Weygandt, Accounting Principles, 13e PRINCIPLES OF ACCT...
WileyPLUSI MyWileyPLUS I Hele I Contect Us I Los Weygandt, Accounting Principles, 13e PRINCIPLES OF ACCT 1&II (ACC 114/1 tice Assignment Gradebook ORION Downloadable eTextbook ament CALCULATOR FULL SCREEN PRINTER VERSION BACK NEXT Exercise 3-15 Pharoah Quest Games Co. adjusts its accounts annual...
##### CH3CH2CH2CN can be synthesized by an SN2 reaction. Draw the structures of the alkyl chloride and...
CH3CH2CH2CN can be synthesized by an SN2 reaction. Draw the structures of the alkyl chloride and nucleophile that will give this compound in highest yield. Use the wedge/hash bond tools to indicate stereochemistry. Include all valence lone pairs in your answer. Separate multiple reactants using the ...
##### A 25.2 kg person climbs up a uniform ladder with negligible mass. The upper end of...
A 25.2 kg person climbs up a uniform ladder with negligible mass. The upper end of the ladder rests on a frictionless wall. The bottom of the ladder rests on a floor with a rough surface where the coefficient of static friction is 0.21. The angle between the horizontal and the ladder is 0. The perso...
##### Questlon 0f 36 eeuue hydrogen al &xygen can be prepardIlie laboratory [rm Ihe decumpositiongascousThe cquation - thc rcuction2H,O(g)2H,(2)+ 0,(g)Calculalc how' many EraMS of = (g) can be prluced from 41.2 g H,O(g)
Questlon 0f 36 eeuue hydrogen al &xygen can be prepard Ilie laboratory [rm Ihe decumposition gascous The cquation - thc rcuction 2H,O(g) 2H,(2)+ 0,(g) Calculalc how' many EraMS of = (g) can be prluced from 41.2 g H,O(g)...
##### A rectangular piece of plywood 1200. mm by 2400. mm is cut fromone corner to an opposite corner. What is the smallest anglebetween edges of the resulting pieces?
A rectangular piece of plywood 1200. mm by 2400. mm is cut from one corner to an opposite corner. What is the smallest angle between edges of the resulting pieces?...
##### Find Cxx(x,y) if C(xy)=4x2_ 12xy - 8y2 9x 10y _ 112 Cxx(x,y)
Find Cxx(x,y) if C(xy)=4x2_ 12xy - 8y2 9x 10y _ 112 Cxx(x,y)...
##### Find the value of ZaClick the icon to view a table of areas under the normal curve.(Round to two decimal places as needed )70.3570.35
Find the value of Za Click the icon to view a table of areas under the normal curve. (Round to two decimal places as needed ) 70.35 70.35...
##### What is the slope of any line perpendicular to the line passing through (3,-4) and (2,-3)?
What is the slope of any line perpendicular to the line passing through (3,-4) and (2,-3)?...
##### For lhe convergent altemaling seriesoyalualetiho nin naria Sum lor n=? Then find (Jk+2)"upper bound (or Ihe enotUsing tne nin paruaasumate the vame clthe seriesTha nth partal sum for tha glven valva ofn /5 (Uype an Inleget cedmal_Rouna Reven decimal Blacesnauded;|
For lhe convergent altemaling series oyalualetiho nin naria Sum lor n=? Then find (Jk+2)" upper bound (or Ihe enot Using tne nin parua asumate the vame clthe series Tha nth partal sum for tha glven valva ofn /5 (Uype an Inleget cedmal_Rouna Reven decimal Blaces nauded;|...
##### The storm runoff X (in cubic meters per second, m?/s) from a subdivision can be modeled by random variable with the following probability density function: f(x) = c[ 14 - (x-2.5)2] for 0 <x < 6 a) Determine the constant cand sketch the pdf: b) Give a formula for the cumulative distribution function of the runoff Xand sketch its graph: The runoff is carried by a pipe with a capacity of 4.5 m/s. Overflow will occur when the runoff exceeds the pipe capacity. If overflow occurs after a st
The storm runoff X (in cubic meters per second, m?/s) from a subdivision can be modeled by random variable with the following probability density function: f(x) = c[ 14 - (x-2.5)2] for 0 <x < 6 a) Determine the constant cand sketch the pdf: b) Give a formula for the cumulative distributio...
##### Determine whether the given function has any local or absolute extreme values, and find those values if possible.$f(x)=|x-1| ext { on }[-2,2]$
Determine whether the given function has any local or absolute extreme values, and find those values if possible. $f(x)=|x-1| \text { on }[-2,2]$...
##### What is the absolute magnitude of the rate of change for [NH ] if the rate of change for [H] is 7.30 Ms in the reaction 2 NH,(g) N,(g) +3 H,(g)?
What is the absolute magnitude of the rate of change for [NH ] if the rate of change for [H] is 7.30 Ms in the reaction 2 NH,(g) N,(g) +3 H,(g)?...
##### Question 6 (0.5 points) You combine 0.65g of agarose with 65ml of TAE to make a 1% agarose gel: What is the solute in this solution?TAEWaterAgaroseIt isn't specifiedAll of these are solutes
Question 6 (0.5 points) You combine 0.65g of agarose with 65ml of TAE to make a 1% agarose gel: What is the solute in this solution? TAE Water Agarose It isn't specified All of these are solutes...
##### Perform a rotation of axes to eliminate the xy-term, and sketch the graph of the conic.$5 x^{2}-6 x y+5 y^{2}-12=0$
Perform a rotation of axes to eliminate the xy-term, and sketch the graph of the conic. $5 x^{2}-6 x y+5 y^{2}-12=0$...
##### During an athletics training session, Jenny runs 12 laps of a 400-meter track. She wants to run a total of 8 km. How many more laps does she need to run?
During an athletics training session, Jenny runs 12 laps of a 400-meter track. She wants to run a total of 8 km. How many more laps does she need to run?...
##### Math 227, Spring 2018, H08 ( 24248 ) Masooda Mokhlis ? | 5/22/18 8:55 PM Test:...
Math 227, Spring 2018, H08 ( 24248 ) Masooda Mokhlis ? | 5/22/18 8:55 PM Test: Chapter 7 TEST Time Limit: 03:00:00 Submit Tes This Question: 1 pt 3 of 20 (2 complete) This Test: 20 pts possible Match each P-value with the graph that correctly displays its area P 0.0119 p 0 0985 p 0 0238 P 0.1977 P ...
##### Aa ~ | P 2-4NormalNo SpacingHeading334a 1 4USING THE PROTEIN DATABASE:Range of the protein product Leugth ofthe protein product Checkpoint 4 Does your protein have mature peptide_ signal sequence andor proprotein?YesFill out the following table: Congratulations, YOu have finished![s thene aun peptide' ILeS_What _ rnge ILLES_What its length? Is thcte signal scquencel ILes_uhat Hts nnge Ives_what Jength? Us thene ptopolein? Nnpel ILxes,what Iength" Ifyes. what the rnge of the pIO sequenc
Aa ~ | P 2-4 Normal No Spacing Heading 334a 1 4 USING THE PROTEIN DATABASE: Range of the protein product Leugth ofthe protein product Checkpoint 4 Does your protein have mature peptide_ signal sequence andor proprotein? Yes Fill out the following table: Congratulations, YOu have finished! [s thene a...
##### 5 ders OFer (15 tn cecinral pont points) Assume that 60% of patrons in shopping mall are female, Retired Mr. Johnson counted 50 patrons passing by him when he was drinking fresh coffee He found that only 25 of the 100 patrons he counted was male_ Assume this is random sample and the true percent of male patrons was 40%. Was Mr. Johnson' observation very unlikely: Explain your conclusion:WE WILL use 5% as cutoff level of likelihood_ Find the probability that Jess than or equal 25 male patron
5 ders OFer (15 tn cecinral pont points) Assume that 60% of patrons in shopping mall are female, Retired Mr. Johnson counted 50 patrons passing by him when he was drinking fresh coffee He found that only 25 of the 100 patrons he counted was male_ Assume this is random sample and the true percent of ...
##### 16) What are the X intercepts? Select all correct ariswers17) What is the y-intercept? (4pts)
16) What are the X intercepts? Select all correct ariswers 17) What is the y-intercept? (4pts)...
##### Couzidr Ir: 'marpsirgwiih Fitjc(0} (JD points] Fintl jv: inge ol thhe line Jin(t) = -luder chiz Turping_ [10 poizts] Shozw thal Whis icnog? iz hc citcle 6-9-{ wlzce Re(") end] & In(o}
Couzidr Ir: 'marpsirg wiih Fitjc (0} (JD points] Fintl jv: inge ol thhe line Jin(t) = -luder chiz Turping_ [10 poizts] Shozw thal Whis icnog? iz hc citcle 6-9-{ wlzce Re(") end] & In(o}...
|
2022-12-05 13:39:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4728850722312927, "perplexity": 8293.297598277479}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711017.45/warc/CC-MAIN-20221205132617-20221205162617-00178.warc.gz"}
|
https://cs.stackexchange.com/questions/116608/remove-a-vertex-from-a-graph-keeping-shortest-path-distance-same
|
# Remove a vertex from a graph keeping shortest path distance same
How could we delete an arbitrary vertex from a directed weighted graph without changing the shortest-path distance between any other pair of vertices? We are allowed to reweight the edges.
• This should be pretty simple. What have you tried? Have you tried solving this for just a small number of nodes? – BlueRaja - Danny Pflughoeft Nov 2 '19 at 22:54
• I think if there is a path from u->x->v where x is to be deleted then connecting u->v by d(u,x)+d(x,v) should solve the problem. Actually, I have 2 follow up questions on this but want to solve them by myself so did not post the entire question here. 8th question in excercises if anyone is curious jeffe.cs.illinois.edu/teaching/algorithms/book/09-apsp.pdf – Deep Bodra Nov 2 '19 at 23:09
• You should specify in your question that you're allowed to add/reweigh edges, if that's the case. – Steven Nov 2 '19 at 23:13
• @Steven It is not mentioned in the question either. I was learning graph theory and found a bunch of interesting questions. See my previous comment for the link to the original question – Deep Bodra Nov 2 '19 at 23:17
• But it is specified in the question... The source you linked says: "Describe an algorithm that constructs a directed graph $G' = (V \setminus \{v\}, E')$ with weighted edges, such that the shortest-path distance between any two vertices in $G'$ is equal to the shortest-path distance between the same two vertices in $G$, in $O(V^2)$ [sic] time." – Steven Nov 2 '19 at 23:20
|
2020-08-15 17:55:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6320855021476746, "perplexity": 436.23551519106525}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439740929.65/warc/CC-MAIN-20200815154632-20200815184632-00333.warc.gz"}
|
https://proxieslive.com/tag/random/
|
## Random variate generation in Type-2 computability
Is there any existing literature on applying the theory of Type-2 computability to the generation of random variates? By “random variate generator” I mean a computable function $$f\colon\subseteq\{0,1\}^{\omega}\rightarrow D$$ such that, if $$p$$ is a random draw from the standard (Cantor) measure on $$\Sigma^{\omega}$$, then $$f(p)$$ is a random draw from a desired probability distribution on $$D$$. Think of $$f$$ as having access to an infinite stream of random bits it can use in generating its output value. Note that $$f$$ need not be a total function, as long as its domain has (Cantor) measure 1.
It seems to me that the way to proceed would be to require that one specify a topology on $$D$$, in fact a computable topological space [1] $$\boldsymbol{S}=(D, \sigma, \nu)$$ where $$\sigma$$ is a countable subbase of the topology and $$\nu$$ is a notation for $$\sigma$$, and use the standard representation $$\delta_{\boldsymbol{S}}$$ of $$\boldsymbol{S}$$. One might also want membership in the atomic properties $$A\in\sigma$$ to be “almost surely” decidable; that is, there is some computable $$g_A\colon\subseteq\{0,1\}^{\omega}\rightarrow\{0,1\}$$ whose domain has measure 1, such that
$$g_A(p) = 1 \mbox{ iff } f(p)\in A$$
whenever $$p\in\mathrm{dom}(g_A)$$.
I’m working on a problem that needs a concept like this, and I’d rather not reinvent the wheel if this is a concept that has already been well explored.
[1] See Definition 3.2.1 on p. 63 of Weihrauch, K. (2000), Computable Analysis: An Introduction.
## Generating trusted random numbers for a group?
Alice and Bob need to share some cryptographically-secure random numbers. Alice does not trust Bob, and Bob does not trust Alice. Clearly, if Alice generates some numbers, and hands them to Bob, Bob is skeptical that these numbers are, in fact, random, and suspects that Alice has instead generated numbers that are convenient for her.
One naive method might be for each of them to generate a random number, and to combine those numbers in some way (e.g. xor). Since they must be shared, and someone has to tell what theirs is first, we might add a hashing scheme wherein:
1) Alice and Bob each generate a random number, hash it, and send it the hash to the other (to allow for verification later, without disclosing the original number). 2) When both parties have received the hash, they then share the original number, verify it, xor their two numbers, and confirm the result of the xor with each other.
However, this has a number of problems (which I’m not sure can be fixed by any algorithm). Firstly, even if Alice’s numbers are random, if Bob’s are not, it is not clear that the resulting xor will then be random. Secondly, I’m not certain that the hashing scheme described above actually solves the “you tell first” problem.
Is this a viable solution to the “sharing random numbers in non-trust comms” problem? Are there any known solutions to this problem that might work better (faster, more secure, more random, etc)?
## Two random variables that sum up to user-defined value [closed]
What I am looking for is to create an application so 6 year old can learn math. This application should generate random examples, like:
1 + 6 5 + 4 3 + 1 0 + 10 9 + 1
The sum should never exceed 10. Cases like “9 + 1” and “1 + 9” are two different cases.
I tried to generate random numbers the following way (JavaScript):
const getRandomInt = (min, max) => { min = Math.ceil(min); max = Math.floor(max); return Math.floor(Math.random() * (max - min + 1)) + min; } const hh1 = {}; const hh2 = {}; for (let i = 0; i < 10000; i++) { const x = getRandomInt(0, 9); const y = getRandomInt(0, 9 - x); if (!hh1[x]) { hh1[x] = 1; } else { hh1[x] = hh1[x] + 1; } if (!hh2[y]) { hh2[y] = 1; } else { hh2[y] = hh2[y] + 1; } }
But it obviously didn’t work, result is:
> hh1 { '0': 1005, '1': 1037, '2': 952, '3': 951, '4': 1048, '5': 986, '6': 1025, '7': 1060, '8': 992, '9': 944 } > hh2 { '0': 2850, '1': 2009, '2': 1438, '3': 1092, '4': 821, '5': 643, '6': 488, '7': 328, '8': 225, '9': 106 }
First number looks random, the the second isn’t. Zero appears 10 times more than 9, for example. One way to fix is to generate all pairs and pick from the pair. But I’m wondering if there is a better way.
## Generalization of a Markov random field and a Bayesian network?
I am seeking a graphical model that is a generalization of both a Markov random field (MRF) and a Bayesian network (BN).
From the Markov random field wiki page:
A Markov network or MRF is similar to a Bayesian network in its representation of dependencies; the differences being that Bayesian networks are directed and acyclic, whereas Markov networks are undirected and may be cyclic. Thus, a Markov network can represent certain dependencies that a Bayesian network cannot (such as cyclic dependencies); on the other hand, it can’t represent certain dependencies that a Bayesian network can (such as induced dependencies).
From the above description, particularly the last sentence, it appears that neither MRFs nor BNs are more general than the other.
Question: Is there a graphical model that encompasses both MRFs and BNs?
I believe such a graphical model will need to be directed so as to be able to model the (undirected) dependencies in a MRF (by included a directed edge in each direction).
## Graph with random edge weights c++
how I can read a graph with random edge weights from a file, and then store the graph information in an adjacency matrix? c++
## How can I handle players who want to browse shops at random?
If one or more of my players decide to go on a shopping spree, I’ve previously had problems describing the shop’s inventory, without going into much detail or making it seem like the shop has only 2 items.
Of course, I can describe the atmosphere and the general nature of the inventory (e.g. “herbs” or “jewelry”), and that’s just fine if the players walk into the shop looking for something specific, such as an herb that stops bleeding or a silver necklace with a sapphire embedded into it.
However, I’m unsure how to handle players that recently noticed “Hey, I’ve got 500 gold floating around that I want to spend on useless stuff”. Or, in other words, players that want to look around for random shops with a random inventory to see if they find something of interest.
In the real world, this works, because you can literally walk into a random shop and look around to see if there’s anything interesting. In D&D, the DM has to come up with something, and it’s boring and frustrating for the players if it’s always the same things.
So, what can I do to make random shopping interesting for the players, without for example preparing huge inventory lists in advance?
## Why my mobile sends SMS to a random number without my permission?
My mobile sends SMS to random numbers without my permission. All messages are sent automatically. Here is an example of an SMS.
SBIUPI qUrXgeX26iEY%2B2si9JHhubIjm7R2aHoo6pWcbXBpJho%3D
All the receivers are Indian numbers (mostly airtel sim card). When I check the Truecaller app I found those numbers are named Cybercrime Frauds and reported as spam by more than 18000 peoples.
I have checked my app SMS permissions and checked for hidden apps. I couldn’t find anything harmful.
Is there anything to afraid?
## Anonymous (privacy-preserving) random walks for graphs
Quoting this paper – SmartWalk (https://dl.acm.org/doi/pdf/10.1145/2976749.2978319):
For graph privacy, strong link privacy relies on deep perturbation to the original graph, indicating a large random walk length. However, as the fixed random walk length increases, the perturbed graph gradually approaches to a random graph, incurring a significant loss of utility.
They propose a machine-learning based approach to determining the appropriate random walk length as a trade-off between utility and security/privacy. However, is there (at all) an anonymous or privacy-preserving method of conducting a random walk itself?
## Random walk problem in matlab
Can you write to me code matlab to this problem? I try many times and dont succses Problem 2.1. First-passage time. The first-passage time (FPT) is defined as the time it takes for a random walker to reach a certain target position for the first time. Consider a special case of the first-passage time: the first-return time, i.e. the time at which the random walker first returns to the origin. Let us denote the probability that this would happen at time t as F(t). Using your random walk data from the previous week (or generating new data, if you prefer), please make a histogram of first-return times of 105 symmetric random walks with step size ∆x = 1. Please do it for random walks whose duration is 104 steps, and also 105 steps. Plot the probability F(t) as a function of time. Now do the same but on a log-log scale. Use this to infer how F(t) falls with time for t >> 1.
## Amount of expected loop iterations when searching an array by random index
Lets say we have an array A of size n. It has 1 as its first index and n as its last index. It contains a value x, with x occurring k times in A where 1<=k<=n
If we have a search algorithm like so:
while true: i := random(1, n) if A[i] == x break
random(a,b) picks a number uniformly from a to b
From this we know that the chances of finding x and terminating the program is k/n with each iteration. However what I would like to know is what would be the expected value for the number of iterations or more specifically the amount of times the array was accessed in this program given the array A as described above.
|
2020-05-29 19:07:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 18, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5641088485717773, "perplexity": 756.1496784557036}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347406365.40/warc/CC-MAIN-20200529183529-20200529213529-00203.warc.gz"}
|
https://trac-hacks.org/ticket/7468
|
Opened 7 years ago
Closed 7 years ago
# [patch] Layout of page with a pullout (sidebar) macro is affected
Reported by: Owned by: Ryan J Ollos Steffen Hoffmann normal WikiTicketCalendarMacro major CSS div table width 0.11
### Description
The issue is demonstrated in the following two screen captures.
Here is my home page without the WikiTicketCalendarMacro at that bottom of the page. You can see that I have a sidebar from the FullBlogPlugin displayed near the top of the page, and some wiki tables to the left.
Here is my home page with the WikiTicketCalendarMacro at the bottom of the page. You can see that the wiki tables are no longer displayed to the left of the FullBlogPlugin sidebar, but have been pushed down the page.
I can't say for certain that this is a problem with the WikiTicketCalendarMacro, but I did not observe this behavior with release 0.8.5 and 0.8.5 beta versions. The problem appeared upon installing 0.8.6.
### comment:1 Changed 7 years ago by Ryan J Ollos
Note: I accidentally mixed up the order of the attachments and description in the Ticket Description.
### comment:2 follow-up: 3 Changed 7 years ago by Steffen Hoffmann
Keywords: CSS div width added normal → major new → assigned
Looks ugly indeed, so at least +1 for severity.
Oh yeah, that could very well be the case. Maybe I did some changes regarding x-axis scaling in between. I'll take a look at this, but with your info this is already narrowed down considerable.
But wait, since I fixed CSS lately, a broken behavior might have been there for longer, but wasn't observable before. - Confirmed. This seems like I added width="100%" to surrounding div-section in changeset [8263] for 0.8.4, but the flawed CSS at the same time.
I'll provide a patch tomorrow, and fix this in repo then, provided you'll confirm that it works for you. Thanks for testing.
### comment:3 in reply to: 2 Changed 7 years ago by Ryan J Ollos
I'll provide a patch tomorrow, and fix this in repo then, provided you'll confirm that it works for you. Thanks for testing.
Certainly, I look forward to testing the new version.
### Changed 7 years ago by Steffen Hoffmann
proposed change to remove width def suspected to cause bad HTML
### comment:4 Changed 7 years ago by Steffen Hoffmann
Summary: Layout of page with a pullout (sidebar) macro is affected → [patch] Layout of page with a pullout (sidebar) macro is affected
Please try the attached patch, or just hand-edit the small change to your WikiTicketCalendarMacro file 0.8.6. For 1.2.1 a similar change would be done in the CSS file and both committed instantly, if your test is positive. Thanks for taking care.
### comment:5 Changed 7 years ago by Ryan J Ollos
The patch seems to be working well. Thanks!
### Changed 7 years ago by Steffen Hoffmann
screenshot of WikiTicketCalendarMacro 1.2.2 collapsed to minimal size due to lack of milestone and ticket information
### comment:6 Changed 7 years ago by Steffen Hoffmann
The consequence of removing the explicit width setting from the div tag surrounding the whole HTML table is, that the calendar is displayed with minimal width that fits all content in one row (see attached screenshot image).
So without additional data the calendar now collapses significantly. However I've prepared a new version at the current state, since the initial issue is solved indeed.
This might be enough, or not. Depending on further user response it could be worth another option, i.e. table_fixed-width=<valid_CSS_size>, where you could set total width in pixel or % of page width as required. Any comments?
### comment:7 Changed 7 years ago by Ryan J Ollos
I can confirm the behavior you describe,
With data:
Without data:
### comment:8 follow-up: 9 Changed 7 years ago by Ryan J Ollos
I'd definitely prefer a setting that allows the width of the calendar to be controlled. The way it I currently use WikiTicketCalendarMacro, I'd prefer it to span the width of the page.
### comment:9 in reply to: 8 ; follow-up: 10 Changed 7 years ago by Steffen Hoffmann
I'd definitely prefer a setting that allows the width of the calendar to be controlled. The way it I currently use WikiTicketCalendarMacro, I'd prefer it to span the width of the page.
I take this a vote for my previous suggestion, right?
### comment:10 in reply to: 9 Changed 7 years ago by Ryan J Ollos
I take this a vote for my previous suggestion, right?
Yes, +1 from me. I'd find the % of page width as required option to be most immediately useful.
### comment:11 Changed 7 years ago by Steffen Hoffmann
Resolution: → fixed assigned → closed
(In [8357]) WikiTicketCalendarMacro: Add flexible calendar display width control, closes #7468.
Forced width by div HTML tag is made optional now, and CSS attribute 'min-width' should be less forceful against other page layout constraints.
### Modify Ticket
Action
as closed The owner will remain Steffen Hoffmann.
The resolution will be deleted. Next status will be 'reopened'.
|
2017-02-24 02:20:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24414049088954926, "perplexity": 4161.139945188741}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171271.48/warc/CC-MAIN-20170219104611-00562-ip-10-171-10-108.ec2.internal.warc.gz"}
|
https://physics.meta.stackexchange.com/questions/12568/should-we-test-lowering-the-vote-to-close-and-reopen-threshold/12569
|
Should we test lowering the vote to close and reopen threshold?
Update
Based on the votes, we will be giving this experiment a try. We are working with the CM team now to figure out what metrics we can measure for comparisons before and during the test and we will update everybody as soon as we come up with a plan and a date to start the test.
Stay tuned!
In early August, Stack Overflow (the company) announced an experiment to lower the number of votes required to close or to reopen a question to 3 votes on Stack Overflow (the website). The trial period was 30 days, and at the end (plus time for data crunching), the results and an in-depth analysis of the experiment were posted. It was a resounding success on Stack Overflow (the website).
Shortly over a week ago, Stack Overflow (the company) announced the change to vote counts would become permanent on Stack Overflow (the website).
The Community Manager team has indicated that other sites are welcome to try out a 30 day test of the same thing if the community agrees. Several smaller sites and beta sites have started the process to undergo the test, motivated by the limited number of active curators on those sites.
Physics.SE is not a small site, but we're also not the size of Stack Overflow. We have a dedicated core group of curators (thank you!!!), but the group isn't big enough to get quick and effective action on closing or reopening questions that warrant it. A completely not scientific, observation-biased look at the close and reopen queues indicates many questions quickly get 3 or 4 close or reopen votes and then languish.
We would like to propose that Physics.SE undergo a 30 day trial of the lowered close and reopen vote counts, so it will only take 3 votes to close or reopen a question. At the end of the 30 days, we can evaluate how the test went and help Stack Overflow (the company) decide if it works on sites of our scale.
I'll leave the discussion here open for 3 weeks (we'll take a final look at the status on 3 January 2020) and if there is enough of a consensus around evaluating it, we can let the Community Management team know and they can enable the new vote count requirements for 30 days.
Is the community interested in experimenting with a 3 vote requirement for opening and closing questions?
• I'm guessing only up votes on the answers? – BioPhysicist Dec 13 '19 at 3:02
• @AaronStevens Let's use votes on the answers to help establish consensus within the discussion. – rob Dec 13 '19 at 3:38
• @rob Yes, I don't think that was ever in question :) I suppose each user gets two possible votes here then (one up and one down vote). – BioPhysicist Dec 13 '19 at 5:37
• @AaronStevens I'm not so worried about it -- we can see the +/- afterall, and we can decide if we want to use other sites' contest rules (only + counts) or something else. Ideally, it's overwhelmingly clear which one the community wants and we don't need to worry about it. – tpg2114 Dec 13 '19 at 14:30
• What would be the metrics used in this study? What criteria would be used to decide whether it was a success or not? – Emilio Pisanty Dec 13 '19 at 20:43
• @EmilioPisanty We'd have to work with the CM's to figure out how much analysis they will do, but I am particularly interested in the efficacy numbers Shog computed for the Stack Overflow test (the table appears about halfway through). I'm not sure if we can get additional metrics, but it would be nice to see how many questions that end up closed get an answer before closure when it takes 5 votes vs 3 votes, since that's a big issue with HW questions. – tpg2114 Dec 13 '19 at 22:11
• They also have measures of close/reopen wars, since this would make it easier to reopen questions also. So we can see if the community frequently disagrees on closures -- an interesting metric would be how many 5 votes to close get 3 votes to reopen now, because that would trigger a reopen during the test. And of course, the queue backlog should also go down considerably as well during the test period if it is more effective. – tpg2114 Dec 13 '19 at 22:14
• So is this status-complete or not? I mean there is significant difference in the support of and against the question. – user249968 Jan 2 '20 at 2:55
• @Kyubey As stated in the question, we will wait until 3 January 2020. Then if the consensus is to try it, we will reach out to the CM team. I'll update the community as those things progress. – tpg2114 Jan 2 '20 at 2:58
• @tpg2114 I don't see any real harm in the experiment, but I'm not convinced at all by the test suite that you've linked to (which I only managed to get to recently). The problems with closure on SO are very different to anything we have here. The focus of the post you linked to is the "effectiveness" of closure votes, i.e., on whether close-votes age away before being acted on (closure / Leave Open) by the queue or otherwise. Is that really a concern here? Do any close-votes get to age away on this site, before the question makes it out of the queue? That'd be deeply surprising to me. – Emilio Pisanty Jan 4 '20 at 2:53
• I do see a different set of issues, though, particularly in the speed of closure and whether it is able to prevent answers getting posted before the closure, especially for homework posts. But that needs a dedicated test suite, and I'm not sure who can build it and whether it can be performed using public SEDE data. (My gut says it probably won't be, as it requires non-public vote timing information.) – Emilio Pisanty Jan 4 '20 at 2:56
• @EmilioPisanty Just in the past hour or so, I've been talking with Shog about what metrics we can get to assess these things -- he has given me data on "answered before closure" as a function of number of votes when closed. We also have current efficacy measurements for each closure reason (at a broad level -- all off-topic are lumped, for example). I've asked for the possibility of some extra information like time spent at each vote count, not sure if we can get that yet or not. If you have other suggestions, let me know and we can see what we can get. – tpg2114 Jan 4 '20 at 3:13
• Before we launch the test, I'd like to have an idea of what metrics we can actually quantify and how we will use them, rather than trying to p-hack our way through data at the end. So I plan on posting a hypothesis/experimental approach post prior to starting the test -- although, I also don't want to influence people's behavior during the test, so I'm not sure the best way to approach that yet... – tpg2114 Jan 4 '20 at 3:13
• @tpg2114 Thanks for the update. I agree with both the p-hack and the blinding concerns. One possible solution is to agree on both overall goals and a detailed protocol with Shog9 (and hopefully also with SE's data scientists), and then post the former but seal the latter (e.g. by posting here and immediately deleting it). Very few of us are proficient enough at statistics that we'd be able to meaningfully engage with the detailed protocol, I think, and I for one would be better reassured that there's a professional statistician handling the implementation of the goals than having them public. – Emilio Pisanty Jan 4 '20 at 3:23
• @ZeroTheHero Given the downsizing of the CM staff recently, this is on hold at the moment. It also means we may not be able to get customized metrics that we were discussing with one of the departed staff members. The CM who is leading the project is aware we want to test it, and many other sites have also volunteered, but we don't know when it will happen. – tpg2114 Jan 25 '20 at 17:19
YES
Physics.SE should undergo a 30 day test with 3 votes required to close and reopen questions.
• I'm interested in why this answer got such a high score so fast. Do we have a population of users who think that the current closure system is broken? If so, how, and what are its main perceived problems? – Emilio Pisanty Dec 13 '19 at 20:46
• @EmilioPisanty the fact that we can't close OT "Do My HW For Me" fast enough is one. As TPG points out in the post (in possibly biased observations), many times posts just hover at 3 votes for a while before being closed, meanwhile OP gets some idiot to so their HW and we get the credit of being a HW help site. – Kyle Kanos Dec 13 '19 at 21:55
• I also am not sure that is categorize the want for faster/easier close/reopens as thinking the system is broken. Honestly, its just not optimal with the same 4-6 users closing questions. Maybe if the community weren't so lazy, 5 would work. But sadly it seems the majority of capable users have zero interest in reviewing – Kyle Kanos Dec 13 '19 at 21:58
• @EmilioPisanty I up voted, not because I think the current system is horrible, but I still think the test could be interesting/useful. If it's better then let's change it. If not, that's fine too. – BioPhysicist Dec 13 '19 at 22:52
• Given the number of highly suspicious questions of late speeding the closure process can only be beneficial. The OP can always clarify through edits. – ZeroTheHero Dec 13 '19 at 23:32
• I tend to agree the problem is lack of motivation (and I do VTC myself). If I see the title of "yet another do my HW question" there's no value to me of voting to close it, compared with just ignoring it. And whatever others might think, I don't personally count "making Physics.SE a more awesome Q&A site" (or even "getting more rep") any motivation at all. – alephzero Dec 14 '19 at 1:26
• .. To be accurate, I did used to VTC, until the script kiddies who maintain SE recently decided unilaterally to screw up support for my default browser, which means voting and commenting is now even more of a pain. Human nature 101: if you don't make it easy for people to do the right things, they won't do them. – alephzero Dec 14 '19 at 1:29
• @alephzero What browser? What changes make voting and commenting harder? – Kyle Kanos Dec 14 '19 at 12:16
This should be an easy answer.
Yes.
We should trial this. Stack Overflow tried it and were really happy with the results. And that's just what it is, a "trial". Rather than speculating on what the ups or downs of this could be, we have the opportunity to experimentally determine the true situation. If it doesn't work well, we don't adopt it permanently. If it works, we accept it. There's very little risk.
Aren't we supposed to be scientists? Why are we standing around talking about what could happen when we can collect real data and test the theory? I don't recall a lot of instances in science when they published a paper saying "there's a 50/50 chance that this theory would work out, so there's no point in anybody running the easy experiment to check the predictions. We'll just assume it's wrong".
Save any arguing for the decision on whether to adopt it permanently and just run the trial now. Isn't that why we do experiments after all? Because it's a lot easier to make a definitive case about something when you've got experimental data backing you up. So whatever side you're on, you want to trial it to prove to the other side that you're right.
• I agree. This was what I was trying to say on my first comment on @akhmeteli's answer – BioPhysicist Dec 17 '19 at 21:44
• OK, then let's do the science. Properly. With well-defined metrics and a well-defined control, and agreed-upon criteria for success and failure. Only then does running the experiment mean that anybody can be 'proved right' by the data. (For full disclosure: I'm guilty of having proposing uncontrolled trials here previously, and they were much less usable because of that.) – Emilio Pisanty Dec 18 '19 at 8:09
• @EmilioPisanty Love you. <3 Love all science – Jim Dec 18 '19 at 12:51
NO
Physics.SE should not undergo a 30 day test with 3 votes required to close and reopen questions.
Another method is The Silver Hammer$$^™$$.
We've had this trouble previously: A guide to moderating Physics Stack Exchange yourself: close voting. Currently Gold Tag has a Dupe Hammer, extending that to silver adds a whole bunch of people. Not just anyone but people whom are knowledgeable about the subject.
There is no reason why this couldn't be combined with a lowering to 3 or 4.
Previous Silver Hammer discussions:
• This really would only work if the questions closed were strictly duplicates, which isn't always the case. Duplicates account for about 17% of all closures (cf. 10k rep mod tools, which only works for those with >10k rep); the HW reason, meanwhile, accounts for about 40%. – Kyle Kanos Dec 21 '19 at 12:03
• @KyleKanos - Your idea (interpretation) was already discussed in The Tavern. The non sequitur is more a derailment than a helpful use of comments. – Rob Dec 21 '19 at 12:34
• How is pointing out that lowering the requirements for dupe-hammer would only impact duplicates a non sequitur?! – Kyle Kanos Dec 21 '19 at 12:51
• Or are you trying to say that my use of statistics to prove my point the non sequitur? Because those definitely help the point that most questions in the close queue are not proposed duplicates so allowing for silver dupe-hammer powers won't help the more than 80% of the cases we see regularly. – Kyle Kanos Dec 21 '19 at 16:40
• I don't see how this helps. SE is obviously uninterested in implementing any kind of "silver hammer" at this stage. Bringing it up now is just a waste of discussion energy -- which makes it all the more rich that you're criticising others for what you claim is precisely that (even though Kyle's comments were no such thing). – Emilio Pisanty Dec 24 '19 at 2:47
I don't think we should test closing with fewer votes. The outcome for thousands of users of Physics SE would be too sensitive to actions of an individual trigger-happy user. If somebody hates a question, (s)he can also flag it. If a moderator does not close the question after that, maybe the question should not be closed, in the first place.
• To be clear -- the test won't enable an individual to do anything they could or couldn't do before, so we won't see people able to single-handedly close questions unless they have a gold badge in that tag, which is how it works now. And the moderators don't typically act of flags asking for closure while the question is still in the review queue -- we prefer to let the community take action as much as possible. Not sure if that helps ease some of your concerns or not... – tpg2114 Dec 14 '19 at 16:02
• But the proposed test isn't one vote required to close. And there is a daily limit to how many close votes a user can use. Plus, you are forgetting about reopening questions. Your answer is based on speculation, which is what a test would be great for. If what you fear is actually true, then I'm sure it wouldn't be applied. I think your answer actually shows why a test would be useful. – BioPhysicist Dec 14 '19 at 16:05
• Also if you can vote to close a question then you can't also raise a close question flag – BioPhysicist Dec 14 '19 at 16:06
• @tpg2114 : I did not say that an individual user would be able to close a question if fewer votes are enough, I said the outcome would be more sensitive to actions of such a user. If elected moderators are reluctant to close flagged questions in a hurry, maybe we, thousands of other users, should also manifest such wisdom and be reluctant to have questions closed with just 3 votes:-) – akhmeteli Dec 14 '19 at 16:39
• @AaronStevens : And who will assess the results of the test and decide if this measure should be permanent? I am afraid that will be the same trigger-happy enthusiasts of express closing. – akhmeteli Dec 14 '19 at 16:47
• I'm sure @tpg2114 Can answer that better, but I would assume a meta post would be made after the trial showing the acquired data and allowing a final vote to be made. I don't understand where your fear of single users having all the power here is coming from. Really the only people with that power are mods, and they use their power very responsibly. – BioPhysicist Dec 14 '19 at 17:03
• @AaronStevens : Please look how it was done at Stack Overflow (see the link to the results and an in-depth analysis in the above question). Before the experiment, the initial vote to close lead to closing in 36% of the cases; during the experiment, the initial vote to close lead to closing in 55% of the cases. In the link, they call it "a resounding success". I call it dangerous sensitivity to an opinion of just one trigger-happy user. – akhmeteli Dec 14 '19 at 17:20
• If you can prove that one user caused the increase in percentage then I'll believe you – BioPhysicist Dec 14 '19 at 17:24
• @AaronStevens : I was not trying to say that it was one and the same user in all cases, I was saying the first vote to close became too influential. So for each closed question, it was just one user (not the same user for all questions) that pretty much determined the fate of the question. – akhmeteli Dec 14 '19 at 17:27
• So do you think that there are many questions currently posted that have 3-4 votes to close that have come from said trigger happy users? I think the fact that the close vote queue is usually fairly full shows there aren't many users like this. – BioPhysicist Dec 14 '19 at 17:31
• @AaronStevens : I cannot offer my own statistics, but, judging by the statistics from the link, most questions getting the first vote to close will be closed under the new rule. I think this is too much, you may think this is actually great:-) – akhmeteli Dec 14 '19 at 17:40
• @akhmeteli While close efficacy went up, so did reopen efficacy -- remember, the test will also change reopen requirements to 3 votes. So for questions that are wrongly closed, it should be much easier to undo that and reopen the question. Of course, we won't know how the balance changes until we test it, and that's why we do a 30 day test and evaluate before diving in full-time. – tpg2114 Dec 14 '19 at 17:44
• It seems to me that what you are suggesting - refer bad questions to a mod who can close it - is in fact worse than having a multivote system. – ZeroTheHero Dec 15 '19 at 18:29
• Well... We have to agree to disagree. I would much rather a question NOT be closed by a moderator. IMO there are not enough users consistently voting (or not) to close - witness the size of the VTC queue - to keep the threshold at 5. Ideally of the queue was systematically pared down, 5 would be better than 3 but alas this is just not happening. – ZeroTheHero Dec 15 '19 at 21:44
• First reopen votes are often edits to the question - that puts the question in review. If closed questions don't get edited, they're unlikely to get reopened. But, y'all actually have a decent track record of handling the reopen queue... during a 60 day period starting 90 days ago, 316 closed questions were edited, putting them in review, and 80% of those reviews were completed (9% reopened, 72% leave closed, 20% aged out). – Catija Dec 19 '19 at 23:12
|
2021-06-24 21:50:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 1, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3614976704120636, "perplexity": 1068.3320463055124}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488559139.95/warc/CC-MAIN-20210624202437-20210624232437-00589.warc.gz"}
|
https://socratic.org/questions/if-a-person-has-the-values-for-an-object-s-density-and-volume-what-value-can-be-
|
If a person has the values for an object's density and volume, what value can be calculated?
Oct 10, 2016
Clearly, you can assess the mass of the object.
Explanation:
$\text{Density, } \rho$ $=$ $\text{Mass"/"Volume}$. And for chemists, $\rho$ typically has the units $g \cdot m {L}^{-} 1$ or $g \cdot c {m}^{-} 3$.
Thus, if you had values for $\rho$ and $\text{volume}$, you could assess the mass of the object by taking the product:
$\text{Mass"="volume"xx"density}$.
Such a product gives units of grams as required, because $\cancel{m L} \times g \cdot \cancel{m {L}^{-} 1} = \text{grams}$.
|
2020-08-05 11:25:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 10, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7347375750541687, "perplexity": 531.7930620259702}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439735939.26/warc/CC-MAIN-20200805094821-20200805124821-00335.warc.gz"}
|
http://nodus.ligo.caltech.edu:8080/40m/page328?&sort=Subject&attach=0
|
40m QIL Cryo_Lab CTN SUS_Lab TCS_Lab OMC_Lab CRIME_Lab FEA ENG_Labs OptContFac Mariner WBEEShop
40m Log, Page 328 of 346 Not logged in
ID Date Author Type Category Subject
6205 Tue Jan 17 03:10:27 2012 kiwamuConfigurationIOOrotated lambda/2 plate
I have slightly rotated the lambda/2 plate, which is used for attenuating the REFL beam's power on the AS table
because the plate had been at an unusual angle for investigation of the glitches since last Thursday.
It means the laser power going to the coating thermal noise setup has also changed. Just keep it in mind.
Quote from #6198 So today we set up the Jenny RC temperature setup to lock the LWE NPRO to the RC and then set up the beat note with the IFO REFL beam on the AS table. By using the 2 laser beat, we are avoiding the VCO phase noise issue which used to limit the PSL frequency noise at ~0.01 Hz/rHz. To do this we have reworked some of the optics on the PSL and AS tables, but I think its been done without disturbing the beams for the regular locking. Beat note has been found, but the NPRO has still not been locked to the RC - next we setup the lockin amp, dither the PZT, and then use the New Focus lock box to lock it to the RC.
7990 Mon Feb 4 10:45:51 2013 JamieSummaryGeneralrough analysis of aligned PRM-PR2 mode scan
Here's a sort of rough analysis of the aligned PRM-PR2 cavity mode scan.
On Friday we took some mode scan measurements of the PRM-PR2 cavity by pushing PRM (C1:SUS-PRM_LSC_EXT) with a 0.01 Hz, 300 count sine wave. We looked at the transmitted power on the POP DC PD and the error signal on REFL11_I.
Below is a detail of the scan, chosen because the actuation was in its linear region and there were three relatively ok looking transmission peaks with nice PDH response curves:
The vertical green lines on the bottom plot indicate the rough averaged separation of the 11 MHz side-band resonances from the carrier, at +- 0.0275 s. If we take this for our calibration, we get roughly 400 MHz / second.
The three peaks in top plot have an average FWHM of 0.00381 s. Given the calibration above, the average FWHM = ~1.52 MHz.
If we assume a cavity length of 1.91 m, FSR = 78.5 MHz.
Putting this together we get a finesse = ~51.6.
Analysis of misaligned mode scans to follow.
7991 Mon Feb 4 11:10:59 2013 KojiSummaryGeneralrough analysis of aligned PRM-PR2 mode scan
The expected finesse is 100ish. How much can we beleive the measured number of 50?
From the number we need to assume PR2 has ~93% reflectivity.
This does not agree with my feeling that the cavity is overcoupled.
Another way is to reduce the reflectivity of the PRM but that is also unlikely from the data sheet.
The scan passed the peak in 4ms according to the fitting.
How do the analog and digital antialiasing filters affect this number?
7994 Mon Feb 4 19:33:19 2013 yutaSummaryGeneralrough analysis of aligned PRM-PR2 mode scan
[Jenne, Yuta]
We redid PRM-PR2 cavity scan because last one (elog #7990) was taken with the sampling frequency of 2 KHz. We have also done TMS measurement.
Method:
1. Align input TTs and PRM to align PRM-PR2 cavity.
2. Sweep cavity length using C1:SUS-PRM_LSC_EXC.
3. Get data using Jamie's getdata and fitted peaks using /users/jrollins/modescan/prc-pr2_aligned/run.py
4. Calculated cavity parameters
Results:
Below is the figure containing peaks used to do the calculation.
From 11 MHz sidebands, calibration factor is 462 +/- 22 MHz/sec (supposing linear scan around peaks)
FWHM is 1.45 +/- 0.03 MHz.
TMS is 2.64 +/- 0.05 MHz.
Error bars are statistical errors of the average over 3 TEM00 peaks.
If we believe cavity length L to be 1.91 m, FSR is 78.5 MHz.
So, Finesse will be 54 +/- 1 and cavity g-factor will be 0.9944 +/- 0.0002. 0.9889 +/- 0.0004 (Edited by YM; see elog #8056)
If we believe RoC of PRM is exactly +122.1 m, measured g-factor insists RoC of PR2 to be -187 +/- 4.
If we believe RoC of PR2 is exactly -600 m, measured g-factor insists RoC of PRM to be 218 +/- 6.
Discussion:
1. Finesse is too small (expected to be ~100). This time, data was taken 16 KHz. Cut-off frequency of the digital antialiasing filter is ~ 5 kHz (see /opt/rtcds/rtscore/release/src/fe/controller.c). FWHM is about 0.003 sec, so it should not effect much according to my simulation.
2. I don't know why FWHM measurement from the last one is similar to this one. The last one was taken 2 KHz, this means anti-aliasing filter of 600 Hz. This should double FWHM.
3. Oscilloscope measurement may clear anti-aliasing suspicion.
7997 Tue Feb 5 02:04:44 2013 yutaSummaryGeneralrough analysis of aligned PRM-PR2 mode scan
I redid PRM-PR2 cavity scan using oscilloscope to avoid anti-aliasing effect.
Measured Finesse was 104 +/- 1.
Method:
1. Splitted POP DC output into three and plugged two into oscilloscope TDS 3034B. Ch1 and Ch2 was set to 1 V/div and 20 mV/div respectively to take the whole signal and higer resolution one at the same time (Koji's suggestion). Sampling frequency was 50 kHz. Sweeping time through FWHM was about 0.001 sec, which is slow enough.
2. Took mode scan data from the oscilloscope via network.
Preliminary results:
Below is the plot of the data for one TEM00 peak.
The data was taken twice.
Measured FWHM was 0.764 MHz and 0.751 MHz. By taking the average, FWHM = 0.757 +/- 0.005 MHz.
This gives you Finesse = 104 +/- 1, which is OK compared with the expectation.
What I need:
I need better oscilloscope so that we can take longer data (~1 sec) with higher resolution (~0.004 V/count, ~50kHz).
TDS 3034B can take data only for 10 ksamples, one channel by one! I prefer Yokogawa DL750 or later.
7998 Tue Feb 5 03:16:51 2013 KojiSummaryGeneralrough analysis of aligned PRM-PR2 mode scan
0.764 and 0.751 do not give us the stdev of 0.005.
I have never seen any Yokogawa in vicinity.
Quote: Measured FWHM was 0.764 MHz and 0.751 MHz. By taking the average, FWHM = 0.757 +/- 0.005 MHz. This gives you Finesse = 104 +/- 1, which is OK compared with the expectation. What I need I need better oscilloscope so that we can take longer data (~1 sec) with higher resolution (~0.004 V/count, ~50kHz). TDS 3034B can take data only for 10 ksamples, one channel by one! I prefer Yokogawa DL750 or later.
8000 Tue Feb 5 10:09:08 2013 yutaSummaryGeneralrough analysis of aligned PRM-PR2 mode scan
stdev of [0.764, 0.751] is 0.007, but what we need is the error of the averaged number. Statistical error of the averaged number is stdev/sqrt(n).
Quote: 0.764 and 0.751 do not give us the stdev of 0.005.
8002 Tue Feb 5 11:30:19 2013 KojiSummaryGeneralrough analysis of aligned PRM-PR2 mode scan
Makes sense. I mixed up n and n-1
Probability function: X = (x1 + x2 + ... + xn)/n, where xi = xavg +/- dx
Xavg = xavg*n/n = xavg
dXavg^2 = n*dx^2/n^2
=> dXavg = dx/sqrt(n)
Xavg +/- dXavg = xavg +/- dx/sqrt(n)
8074 Wed Feb 13 01:26:08 2013 yutaSummaryGeneralrough analysis of aligned PRM-PR2 mode scan
Koji was correct.
When you estimate the variance of the population, you have to use unbiased variance (not sample variance). So, the estimate to dx in the equations Koji wrote is
dx = sqrt(sum(xi-xavg)/(n-1))
= stdev*sqrt(n/(n-1))
It is interesting because when n=2, statistical error of the averaged value will be the same as the standard deviation.
dXavg = dx/sqrt(n) = stdev/sqrt(n-1)
In most cases, I think you don't need 10 % precision for statistical error estimation (you should better do correlation analysis if you want to go further). You can simply use dx = stdev if n is sufficiently large (n > 6 from plot below).
Quote: Makes sense. I mixed up n and n-1 Probability function: X = (x1 + x2 + ... + xn)/n, where xi = xavg +/- dx Xavg = xavg*n/n = xavg dXavg^2 = n*dx^2/n^2 => dXavg = dx/sqrt(n) Xavg +/- dXavg = xavg +/- dx/sqrt(n)
11857 Mon Dec 7 11:11:25 2015 yutaroSummaryLSCround trip loss of X arm
On the day before yesterday and in this morning, I measured loss map of ETMX. I reported the method I used to change the beam spot on ETMX below.
Round trip loss was measured for 5 x 5 points. The result is below.
(unit: ppm)
455.4 +/- 21.1 437.1 +/- 21.8 482.3 +/- 21.8 461.6 +/- 22.5 507.9 +/- 20.1
448.4 +/- 20.7 457.3 +/- 21.2 495.6 +/- 20.2 483.1 +/- 20.8 472.2 +/- 19.8
436.9 +/- 19.3 444.6 +/- 19.7 483.0 +/- 19.5 474.9 +/- 20.9 498.3 +/- 18.7
454.4 +/- 18.7 474.4 +/- 20.6 487.7 +/- 21.4 482.6 +/- 20.7 487.0 +/- 19.9
443.7 +/- 18.6 469.9 +/- 20.2 482.8 +/- 18.7 480.9 +/- 19.5 486.1 +/- 19.2
The correspondence between the loss shown above and the beam spot on ETMX is shown in the attached figure. In the figure, "up" and "right" indicate direction of shift of the beam spot when you watch it via the camera (ex. 455.4 ppm corresponds to the highest and rightest point in the view via the camera).
This result is consistent withe previous result of 561.19 +/- 14.57 ppm ericq got with ASDC and reported in elog 10248 if the discussion I reported in 11819 is taken into account. Elog 11819 says in short that the strange behavior of ASDC could give us 60-70 ppm error.
The reason why the error is larger than that of the measurement for ETMY is that the noise of POX is larger than that of POY. But I am not sure to what extent the statistical error needs to be reduced.
How I shifted the beam spot on ETMX:
Basically, the method was same as one used for Y arm. Different point is: for Y arm we have two steering mirrors TT1&2, but for X arm we have only one steering mirror BS. Then in order to shift incident beam so that the beam spot on ITMX does not change, I ran the dithering of X arm as well as that of Y arm and added offsets to both dither loops that caused same amount of shift on ETMX and ETMX. Thanks to the symmetry between X arm and Y arm, the dithering of Y arm ensured that the beam spot on ITMX was unchanged as well as that of ITMY. The idea of this method is schematically shown in Attachment 2.
The calibration of how much the beam spot shifted is based on the results of elog 11846 . The offset was [-15,-7.5,0,7.5,15]x[-5,-2.5,0,2.5,5] for pitch and yaw, respectively.
11810 Wed Nov 25 16:40:32 2015 yutaroUpdateLSCround trip loss of Y arm
I measured round trip loss of Y arm. The alignment of relevant mirrors was set ideal with dithering (no offset).
Summary:
round trip loss of Y arm: 166.2 +/- 9.3 ppm
(In the error, only statistic error is included.)
How I measured it:
I compared the power of light reflected by Y arm (measured at AS) when the arm was locked (P_L) and when ETMY was misaligned (P_M). P_L and P_M can be described as
$P_M=P_0(1-T_\mathrm{ITM})$
$P_L=P_0\left[1-(1-\alpha)\frac{4T_\mathrm{ITM}}{T_\mathrm{tot}^2}T_\mathrm{loss}\right]$.
The reason why P_L takes this form is: (1-alpha)*4T_ITM/(T_tot)^2 is intracavity power and then product of intracavity power and loss describes the power of light that is not reflected back. Here, alpha is power ratio of light that does not resonate in the arm (power of mismatched mode and modulated sideband), and T_tot is T_ITM+T_loss. Transmissivity of ETM is included in T_loss. I assumed alpha = 7%(mode mismatch) + 2 % (modulation) (elog 11745)
After some calculation we get
$1-P_L/P_M\simeq \frac{4(1-\alpha) T_\mathrm{loss}}{T_\mathrm{ITM}}-T_\mathrm{ITM}$.
Here, higher order terms of T_ITM and (T_loss/T_ITM) are ignored. Then we get
$(1-\alpha) T_\mathrm{loss}=\frac{T_\mathrm{ITM}}{4}(1-P_L/P_M+T_\mathrm{ITM})$.
Using this formula, I calculated T_loss. P_L and P_M were measured 100 times (each measurement consisted of 1.5 sec ave.) each and I took average of them. T_ETM =13.7 ppm is used.
Discussion:
-- This value is not so different from the value ericq reported in July (elog 10248).
-- This method of measuring arm loss is NOT sensitive to T_ITM. In contrast, the method in which loss is obtained from finesse (for example, elog 11740) is sensitive to T_ITM.
In the method I'm now reporting,
$\Delta T_\mathrm{loss}/T_\mathrm{loss}\simeq\Delta T_\mathrm{ITM}/T_\mathrm{ITM}$,
but in the method with finesse,
$\Delta T_\mathrm{loss}\simeq\Delta T_\mathrm{ITM}$.
In the latter case, if relative error of T_ITM is 10%, error of T_loss would be 1000 ppm.
So it would be better to use power of reflected light when you want to measure arm loss.
11816 Wed Nov 25 23:34:52 2015 yutaroUpdateLSCround trip loss of Y arm
[yutaro, Koji]
Due to the strange behavior (elog 11815) of ASDC level, we checked if it is possible to use POYDC instead of ASDC to measure the power of reflected light of YARM. Attached below is the spectrum of them when the arm is locked. This spectrum shows that it is not bad to use POYDC, in terms of noise. The spectrum of them when ETMY is misaligned looked similar.
So I am going to use POYDC instead of ASDC to measure arm loss of YARM.
Ed by KA:
The spectra of POYDC and ASDC were measured. We foudn that they have coherence at around 1Hz (good).
It told us that POYDC is about 1/50 smaller than ASDC. Therefore in the attached plot, POYDC x50 is shown.
That's the meaning of the vertical axis unit "ASDC".
11818 Fri Nov 27 03:38:23 2015 yutaroSummaryLSCround trip loss of Y arm
Tonight I measured "loss map" of ETMY. The method to calculate round trip loss is same as written in elog 11810, except that I used POYDC instead of ASDC this time.
How I changed beam spot on ETMY is: elog 11779.
I measured round trip loss for 5 x 5 points. The result is below.
(unit: ppm)
494.9 +/- 7.6 356.8 +/- 6.0 253.9 +/- 7.9 250.3 +/- 8.2 290.6 +/- 5.1
215.7 +/- 4.8 225.6 +/- 5.7 235.1 +/- 7.0 284.4 +/- 5.4 294.7 +/- 4.5
205.2 +/- 6.0 227.9 +/- 5.8 229.4 +/- 7.2 280.5 +/- 6.3 320.9 +/- 4.3
227.9 +/- 5.7 230.5 +/- 5.5 262.1 +/- 5.9 315.3 +/- 4.7 346.8 +/- 4.2
239.7 +/- 4.5 260.7 +/- 5.3 281.2 +/- 5.8 333.7 +/- 5.0 373.8 +/- 4.9
The correspondence between the loss shown above and the beam spot on ETMY is shown in the following figure. In the figure, "downward" and "left" indicate direction of shift of the beam spot when you watch it via the camera (ex. 494.9 ppm corresponds to the lowest and rightest point).
Edited below on 28th Nov.
To shift the beam spot on ETMY, I added offset in YARM dither loop. The offset was [-30,-15,0,15,30]x[-10,-5,0,5,10] for pitch and yaw, respectively. How I calibrated the beam spot is basically based on elog 11779, but I multiplied 5.3922 for vertical direction and 4.6205 for horizontal direction which I had obtained by caliblation of oplev (elog 11785).
Edited above on 28 th Nov.
I will report the detail later.
11819 Fri Nov 27 22:20:24 2015 yutaroUpdateLSCround trip loss of Y arm
Here, I upload data I took last night, including the power of reflected power (locked/misaligned) and transmitted power for each point (attachement 1).
And I would like to write about possible reason why the loss I measured with POYDC and the loss I measured with ASDC are different by about 60 - 70 ppm (elog 11810 and 11818). The conclusion I have reached is:
It could be due to the strange bahavior of ASDC level.
This difference corresponds to the error of ~2% in the value of P_L/P_M. As reported in elog 11815, ASDC level changes when angle of the light reflected by ITMY changes, and 2% change of ASDC level corresponds to 10 urad change of the angle of the light according to my rough estimation with the figure shown in elog 11815 and attachment 2. This means that 2% error in P_L/P_M could occur if the angle of the light incident to YARM and that of resonant light in YARM differ by 10 urad. Since the waist width $w_0$ of the beam is ~3 mm, with the 10 urad difference, the ratio of the power of TEM10 mode is $(10\,\mu \mathrm{rad}/ \theta_0)^2\sim0.01$, where $\theta_0=\lambda/\pi w_0$. This value is reasonable; in elog 11743 Gautam reported that the ratio of the power of TEM10 was ~ 0.03, from the result of cavity scan. Therefore it is possible that the angle of the light incident to YARM and that of resonant light in YARM differ by 10 urad and this difference causes the error of ~2% in P_L/P_M, which could exlain the 60 - 70 ppm difference.
375 Thu Mar 13 12:11:58 2008 aivanovUpdateComputer Scripts / Programsrouting PEM -> ASS -> SUS_MCL
on ASS RFM 1 has PEM signals at
float at 0x100000 has c0dcu1 first ICS110B chan 1
float at 0x100004 has chan 2
etc.
ASS sends to RFM 0
float at 0x100000 goes to PRM MCL
0x100004 to BS MCL
0x100008 to IMTX MCL
0x10000c to ITMY MCL
0x100010 to SRM MCL
0x100018 to MC1 MCL
0x10001c to MC3 MCL
0x100020 to ETMX MCL
0x100024 to ETMY MCL
380 Fri Mar 14 15:06:24 2008 robUpdateComputer Scripts / Programsrouting PEM -> ASS -> SUS_MCL
Quote: on ASS RFM 1 has PEM signals at float at 0x100000 has c0dcu1 first ICS110B chan 1 float at 0x100004 has chan 2 etc. ASS sends to RFM 0 float at 0x100000 goes to PRM MCL 0x100004 to BS MCL 0x100008 to IMTX MCL 0x10000c to ITMY MCL 0x100010 to SRM MCL 0x100018 to MC1 MCL 0x10001c to MC3 MCL 0x100020 to ETMX MCL 0x100024 to ETMY MCL
You can differentiate between RFM 0 and RFM 1 in the simulink model by adding 0x4000000 to the offsets for RFM 1.
11161 Mon Mar 23 19:30:36 2015 ranaUpdateComputer Scripts / Programsrsync frames to LDAS cluster
The rsync job to sync our frames over to the cluster has been on a 20 MB/s BW limit for awhile now.
Dan Kozak has now set up a cronjob to do this at 10 min after the hour, every hour. Let's see how this goes.
You can find the script and its logfile name by doing 'crontab -l' on nodus.
11288 Wed May 13 09:17:28 2015 ranaUpdateComputer Scripts / Programsrsync frames to LDAS cluster
Still seems to be running without causing FB issues. One thought is that we could look through the FB status channel trends and see if there is some excess of FB problems at 10 min after the hour to see if its causing problems.
I also looked into our minute trend situation. Looks like the files are comrpessed and have checksum enabled. The size changes sometimes, but its roughly 35 MB per hour. So 840 MB per day.
According to the wiper.pl script, its trying to keep the minute-trend directory to below some fixed fraction of the total /frames disk. The comment in the scripts says 0.005%,
but I'm dubious since that's only 13TB*5e-5 = 600 MB, and that would only keep us for a day. Maybe the comment should read 0.5% instead...
Quote: The rsync job to sync our frames over to the cluster has been on a 20 MB/s BW limit for awhile now. Dan Kozak has now set up a cronjob to do this at 10 min after the hour, every hour. Let's see how this goes. You can find the script and its logfile name by doing 'crontab -l' on nodus.
11299 Mon May 18 14:22:05 2015 ericqUpdateComputer Scripts / Programsrsync frames to LDAS cluster
Quote: Still seems to be running without causing FB issues.
I'm not so sure. I just was experiencing some severe network latency / EPICS channel freezes that was alleviated by killing the rsync job on nodus. It started a few minutes after ten past the hour, when the rysnc job started.
Unrelated to this, for some odd reason, there is some weirdness going on with ssh'ing to martian machines from the control room computers. I.e. on pianosa, ssh nodus fails with a failure to resolve hostaname message, but ssh nodus.martian succeeds.
4247 Thu Feb 3 17:25:03 2011 josephbUpdateComputersrsync script was not really backing up /cvs/cds
So today, after an "rm" error while working with the autoburt.pl script and burt restores in general, I asked Dan Kozak how to actually look at the backup data. He said there's no way to actually look at it at the moment. You can reverse the rsync command or ask him to grab the data/file if you know what you want. However, in the course of this, we realized there was no /cvs/cds data backup.
Turns out, the rsync command line in the script had a "-n" option. This means do a dry run. Everything *but* the actual final copying.
I have removed the -n from the script and started it on nodus, so we're backing up as of 5:22pm today.
I'm thinking we should have a better way of viewing the backup data, so I may ask Dan and Stewart about a better setup where we can login and actually look at the backed up files.
In addition, tomorrow I'm planning to add cron jobs which will put changes to files in the /chans and /scripts directories into the SVN on a daily basis, since the backup procedure doesn't really provide a history for those, just a 1 day back backup.
6807 Tue Jun 12 17:46:09 2012 JenneUpdateComputersrtcds: command found
Quote:
Quote: We can't compile any changes to the LSC or the GCV models since Jamie's new script / program isn't found. I don't know where it is (I can't find it either), so I can't do the compiling by hand, or point explicitly to the script. The old way of compiling models in the wiki is obsolete, and didn't work :(
Sorry about that. I had modified the path environment that pointed to the rtcds util. The rtcds util is now in /opt/rtcds/caltech/c1/scripts/rtcds, which is in the path. Starting a new shell should make it available again.
Added TRX and TRY and POY11_I_ERR and POX11_I_ERR to the c1lsc.mdl using a new-style DAQ Channels block, recompiled, installed, started the model, all good. Restarted the daqd on the framebuilder, and everything is green. I can go back and get recorded data using dataviewer (for the last few minutes since I started fb), so it all looks good.
Note on the new DAQ Channels block: Put the text block (from CDS_PARTS) at the same level as the channel you want to save, and name it exactly as it is in the model. The code-generator will add the _DQ for you. i.e. if you define a channel "TRY_OUT_DQ" in the lsc model, you'll end up with a channel "C1:LSC-TRY_OUT_DQ_DQ".
We can't compile any changes to the LSC or the GCV models since Jamie's new script / program isn't found. I don't know where it is (I can't find it either), so I can't do the compiling by hand, or point explicitly to the script. The old way of compiling models in the wiki is obsolete, and didn't work :(
This means we can't (a) record TRY or (b) add the Q quadrature of the beat PD to the real time system tonight.
We're going to try just using Yuta's pynds script to capture data in real time, so we can keep working for tonight.
Quote: We can't compile any changes to the LSC or the GCV models since Jamie's new script / program isn't found. I don't know where it is (I can't find it either), so I can't do the compiling by hand, or point explicitly to the script. The old way of compiling models in the wiki is obsolete, and didn't work :(
Sorry about that. I had modified the path environment that pointed to the rtcds util. The rtcds util is now in /opt/rtcds/caltech/c1/scripts/rtcds, which is in the path. Starting a new shell should make it available again.
6270 Fri Feb 10 15:46:59 2012 steveUpdateSUSruby wire standoff
Finally I found a company who can do Koji's improved -hard to make- specification on ruby or sapphire wire standoff.
NOT POLISHED excimer laser cut, wire groove radius R 0.0005" + - 0.0002"
\$250 ea at 50 pieces of order
13039 Mon Jun 5 10:30:45 2017 SteveUpdateSUSruby wire standoff pictures
Atm 1 & 5, showing the ruby R ~10 mm as it is seated on Al SOS test mass
Atm. 2, 3 & 4 chipped long edges with SOS sus wire OD 43 micron as calibration
Quote: Ruby wire standoff received from China. I looked one of them with our small USB camera. They did a good job. The long edges of the prism are chipped. The v-groove cutter must avoid them. Pictures will follow.
13123 Mon Jul 17 16:22:01 2017 SteveUpdateSUSruby wire standoff pictures
Bluebean Optical Tech Limited of Shanghai delivered 50 pieces red ruby prisms with radius. The first prism pictures were taken at June 5
and it was retaken again as BB#1 later
More samples were selected randomly as one from each bag of 5 and labeled as BB#2.......6
The R10 mm radius can be seen agains the ruler edge. The v-groove edge was labeled with blue marker and pictures were taken
from both side of this ridge. The top view is shown as the wire laying across on it.
SOS sus wire of 43 micron OD used as calibration as it was placed close to the side that it was focused on.
The V-groove ridge surface quality was evaluated based on as scale of 1 – 10 with 10 being the most positive.
BB# Edge quality score 1 4 2 8 3 3 4 9.5 5 2 6 9
Remaining thing to examin, take picture of the contacting ridge to SOS from the side.
12117 Sun May 15 19:48:08 2016 SteveUpdateVACrun out of N2
3-4 hrs ago we run out of nitrogen. We are back to Vacuum Normal
13903 Thu May 31 15:48:16 2018 KiraUpdatePEMrunning PID script with seismometer
I have attached the result of running the PID script on the seismometer with the can on. The daily fluctuations are no more than 0.07 degrees off from the setpoint of 39 degrees. Not really sure what happened in the past day to cause the strange behavior. It seems to have returned back to normal today.
280 Mon Jan 28 15:11:38 2008 robHowToDMFrunning compiled matlab DMF tools
I compiled Rana's seisBLRMS monitor, and it's now running on mafalda. To start your own DMF tools, here is a procedure:
1) build your tool in mDV, get it working the way you'd like.
2) Make a new directory /cvs/cds/caltech/apps/DMF/compiled_matlab/{your_new_directory} and copy the *.m file there.
3) Make the *.m in your new directory into a function with no args (just add a function line at the top)
4) compile it (from within a fully mDV-configured matlab) with mcc -m -R -nojvm {yourfile}.m at the matlab command line.
5) add a line corresponding to your new tool to the script /cvs/cds/caltech/apps/DMF/scripts/start_all
6) Run the start_all script referenced in part (5).
NB: Steps (4) and (6) must be carried out on mafalda.
13978 Mon Jun 18 10:34:45 2018 johannesUpdateComputer Scripts / Programsrunning comsol job on optimus
I'm running a comsol job on optimus in a tmux session named cryocavs. Should be done in less than 24 hours, judging by past durations.
11410 Tue Jul 14 13:55:28 2015 jamieUpdateCDSrunning test on daqd, please leave undisturbed
## I'm running a test with daqd right now, so please do not disturb for the moment.
I'm temporarily writing frames into a tempfs, which is a filesystem that exists purely in memory. There should be ZERO IO contention for this filesystem, so if the daqd failures are due to IO then all problems should disappear. If they don't, then we're dealing with some other problem.
There will be no data saved during this period.
11413 Tue Jul 14 17:06:00 2015 jamieUpdateCDSrunning test on daqd, please leave undisturbed
I have reverted daqd to the previous configuration, so that it's writing frames to disk. It's still showing instability.
6201 Sun Jan 15 12:18:00 2012 DenUpdateAdaptive Filteringrunning time
In order to figure out what downsampling ratio we can take, we need to determine the running time of the fxlms_filter() function. If the filter length is equal to 5000, downsampling ratio is equal to 1, number of witness channels is 1 then with ordinary compilation without speed optimization one call runs for 0.054 ms (milli seconds). The test was done on the 3 GHz Intel processor. With speed optimization flags the situation is better
-O1 0.014 ms, -O3 0.012 ms
However, Alex said that speed optimization is not supported at RCG because it produce unstable execution time for some reason. However, by default the kernel should optimize for size -Os. With this flag the running time is also 0.012 ms. We should check if the front-end machine compilers indeed use -Os flag during the compilation and also play with speed optimization flags. Flags -O3 and -Os together might also give some speed improvement.
But for now we have time value = 0.012 ms as running time for 5000 coefficient filter, 1 witness channel and downsample ratio = 1. Now, we need to check how this time is scaled if we change the parameters.
5000 cofficients - 0.012 ms
10000 coefficients - 0.024 ms
15000 coefficients - 0.036 ms
20000 coefficients - 0.048 ms
We can see that filter length scaling is linear. Now we check downsampling ratio
ratio=1 - 0.048 ms
ratio=2 - 0.024 ms
ratio=4 - 0.012 ms
Running time on the dependance of downsample ratio is also linear as well as on the dependence of the number of witness channels and degrees of freedom.
If we want to filter 8 DOF with approximately 10 witness channels for each DOF, then 5000 length filter will make 1 cycle for ~1 ms, that is good enough to make the downsample ratio equal to 4.
Things get a little bit complicated when the code is called for the first time. Some time is taken to initialize variables and check input data. As a result the running time of the first cycle is ~0.1 ms for 1 DOF that is ~10 times more then running time of an ordinary cycle. This effect takes place at this moment when one presses reset button in the c1oaf model - the filter becomes suspended for a while. To solve this problem the initialization should be divided by several (~10) parts.
12608 Wed Nov 9 11:40:44 2016 ericqUpdateCDSsafe.snap BURT files now in svn
This is long overdue, but our burt files for SDF now live in the LIGO userapps SVN, as they should.
The canonical files live in places like /opt/rtcds/userapps/release/cds/c1/burtfiles/c1x01_safe.snap and are symlinked to files like /opt/rtcds/caltech/c1/target/c1x01/c1x01epics/burt/safe.snap
2647 Mon Mar 1 08:49:37 2010 steveBureaucracySAFETYsafety audit
The 40m safety audit will be at Wednesday afternoon, March 3
6329 Tue Feb 28 11:20:51 2012 steveUpdateSAFETYsafety audit 2012
Correction list by visiting safety committee, Haick Issaian is not shown:
1, update laser, crane operator list and post it
2, check fire extinguishers monthly, date and initials must be on the tags
3, move drinking water tower so it does not block fire extinguisher
4, post updated crane doc at cranes
5, post present phone lists at IFO room phones
6, emergency laser shutoff at the south end must be mounted without C-clamp
7, use heavy cable tie to insure position of mag-fan on cabinet top
a, safety glasses to be cleaned
b, let the electrical shop fix Rack-AC power to optical tables at the ends
c, measure transmission of laser safety glasses
d, update IFO outside door info signs
e, update laser inventory and post it
f, schedule annual crane inspection and renew maintenance contract
g, PSL enclosure inner shelf needs a good clean up so it is earthquake safe
6470 Fri Mar 30 09:37:13 2012 steveUpdateSAFETYsafety audit 2012 CORRECTIONS
Quote: Correction list by visiting safety committee, Haick Issaian is not shown: 1, update laser, crane operator list and post it 2, check fire extinguishers monthly, date and initials must be on the tags 3, move drinking water tower so it does not block fire extinguisher 4, post updated crane doc at cranes 5, post present phone lists at IFO room phones 6, emergency laser shutoff at the south end must be mounted without C-clamp 7, use heavy cable tie to insure position of mag-fan on cabinet top Additional to do list: a, safety glasses to be cleaned b, let the electrical shop fix Rack-AC power to optical tables at the ends c, measure transmission of laser safety glasses d, update IFO outside door info signs e, update laser inventory and post it f, schedule annual crane inspection and renew maintenance contract g, PSL enclosure inner shelf needs a good clean up so it is earthquake safe
Completed with the exception of d and g
8240 Wed Mar 6 11:33:09 2013 steveUpdateSAFETYsafety audit 2013
Recommended correction list:
1, refill- upgrade first aid boxes
2, maintain 18" ceiling to bookshelf clearance so the ceiling fire sprinklers are not blocked: room 101
3, label chilled water supply & return valves in IFO room
4, calibrate bake room hoods annually
5, update safety sign at fenced storage
40m still to do list:
1, clean and measure all safety glasses
2, annual crane inspection is scheduled for 8am March 19, 1013
3, make PSL encloser shelf earthquake proof
Do you see something that is not safe? Add it to this list please.
9672 Tue Feb 25 16:54:57 2014 steveUpdatesafetysafety audit 2014
We had our annual safety inspection today. Our SOPs are outdated. The full list of needed correction will be posted tomorrow.
The most useful found was that the ITMX-ISCT ac power is coming from 1Y1 rack. This should actually go to 1Y2 LSC rack ?
Please test this so we do not create more ground loops.
9725 Thu Mar 13 16:05:48 2014 steveUpdatesafetysafety audit 2014
Quote: We had our annual safety inspection today. Our SOPs are outdated. The full list of needed correction will be posted tomorrow. The most useful found was that the ITMX-ISCT ac power is coming from 1Y1 rack. This should actually go to 1Y2 LSC rack ? Please test this so we do not create more ground loops.
Annual crane inspection is scheduled for 8-11am Monday, March 17, 2014
The control room Smart UPS has two red extension cords that has to be removed: Nodus and Linux1
9870 Mon Apr 28 17:27:29 2014 steveUpdatesafetysafety audit 2014
Quote:
Quote: We had our annual safety inspection today. Our SOPs are outdated. The full list of needed correction will be posted tomorrow. The most useful found was that the ITMX-ISCT ac power is coming from 1Y1 rack. This should actually go to 1Y2 LSC rack ? Please test this so we do not create more ground loops.
Annual crane inspection is scheduled for 8-11am Monday, March 17, 2014
The control room Smart UPS has two red extension cords that has to be removed: Nodus and Linux1
Last long extension cord removed from 1Y1 to ITMX-ISCT
The AC power strip at ITMX-ISCT is coming from wall L#26
9873 Tue Apr 29 00:11:03 2014 steveUpdatesafetysafety audit 2014
Be aware that this may affect POP QPD and POP RF Thorlabs PD
Quote: Last long extension cord removed from 1Y1 to ITMX-ISCT The AC power strip at ITMX-ISCT is coming from wall L#26
9900 Fri May 2 08:15:54 2014 steveUpdatesafetysafety audit 2014
Late adition: CHECK all viewport covers.
A, transparent Lexan sheet is protecting glass windows in a horizontal position
B, metal housing protection is required on each viewport except signal ports
C, signal ports should be shielded by optical table enclosure
We have to cover this window-camera with implosion proof cover or just remove it and blank it.
Question number 2: Do our vertically positioned windows with flip able covers require protective lexan ? NO 5-5-2014
11068 Wed Feb 25 14:35:32 2015 SteveUpdatesafetysafety audit 2015
Safety audit went soothly. We thank all participients.
Correction list:
1, Bathroom water heater cable to be stress releived and connector replaced by twister lock type.
2, Floor cable bridge at the vacuum rack to be replaced. It is cracked.
3, Sprinkler head to be moved eastward 2 ft in room 101
4, Annual crane inspection is scheduled for 8am Marc 3, 2015
5, Annual safety glasses cleaning and transmission measurement will get done tomorrow morning.
11085 Fri Feb 27 14:17:51 2015 SteveUpdatesafetysafety audit 2015
Safety glasses were measured and they are all good. I'd like to measure your personal glass if it is not on this picture.
Quote: Safety audit went soothly. We thank all participients. Correction list: 1, Bathroom water heater cable to be stress releived and connector replaced by twister lock type. 2, Floor cable bridge at the vacuum rack to be replaced. It is cracked. 3, Sprinkler head to be moved eastward 2 ft in room 101 4, Annual crane inspection is scheduled for 8am Marc 3, 2015 5, Annual safety glasses cleaning and transmission measurement will get done tomorrow morning.
12008 Wed Feb 24 10:27:23 2016 SteveUpdatesafetysafety audit 2016
Safety audit went smothly.
Crane inspection is scheduled for March 4
Safety glasses will be measured before April 1
12018 Thu Mar 3 10:19:20 2016 SteveUpdatesafetysafety audit 2016
Bob cleaned the safety glasses. They were sonicated in warm 2% Liquinox water for 10 minutes. Steve checked them by transmission measurement of 1064 nm at 150 mW
9681 Thu Feb 27 13:11:13 2014 steveUpdatesafetysafety audit correction
Quote: We had our annual safety inspection today. Our SOPs are outdated. The full list of needed correction will be posted tomorrow. The most useful found was that the ITMX-ISCT ac power is coming from 1Y1 rack. This should actually go to 1Y2 LSC rack ? Please test this so we do not create more ground loops.
Linus-1, Nodus and others ac cords can be moved over to new blank yellow extension cord with multiple recepticals.
Remove two red extension cords going to Smart UPS
6322 Mon Feb 27 10:21:37 2012 steveUpdateSAFETYsafety audit tomorrow morning
Quote: Emergency exit lights were inspected: 2 out of 13 batteries have to be replaced One of the Halon fire extinguishers needs to be recharged out of 8 Please do participate in preparation for the upcoming safety audit on Feb 28
Batteries replaced and cylinder recharged. Please clean up your experimental set up if it is blocking breakers or entry way etc.
I will start the final clean up 2pm today.
6308 Thu Feb 23 09:09:33 2012 steveUpdateSAFETYsafety checks
Emergency exit lights were inspected: 2 out of 13 batteries have to be replaced
One of the Halon fire extinguishers needs to be recharged out of 8
Please do participate in preparation for the upcoming safety audit on Feb 28
ELOG V3.1.3-
|
2023-02-01 05:33:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 9, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.43796709179878235, "perplexity": 7853.68546663508}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499911.86/warc/CC-MAIN-20230201045500-20230201075500-00468.warc.gz"}
|
http://mathhelpforum.com/calculus/85303-slope-fields-horizontal-segments.html
|
# Math Help - Slope Fields and horizontal segments
1. ## Slope Fields and horizontal segments
I am not really sure how to start this problem:
The slope field of the differential equation is dy/dx=(x^2y+y^2x)/3x+y will have horizontal segments when...
How do I find the horizontal segments without a calculator. I think I need to integrate, but I'm not sure.
2. Originally Posted by Moderatelyconfused
I am not really sure how to start this problem:
The slope field of the differential equation is dy/dx=(x^2y+y^2x)/3x+y will have horizontal segments when...
How do I find the horizontal segments without a calculator. I think I need to integrate, but I'm not sure.
Do you mean this:
$\frac{dy}{dx} = \frac{x^2 y+y^2 x}{3x}+y$
or this:
$\frac{dy}{dx} = \frac{x^{2y}+y^{2x}}{3x}+y$
?
3. I meant: dy/dx=(x^2y+y^2x)/(3x+y)
I apologize, the y was actually part of the denominator.
4. Originally Posted by Moderatelyconfused
I am not really sure how to start this problem:
The slope field of the differential equation is dy/dx=(x^2y+y^2x)/3x+y will have horizontal segments when...
How do I find the horizontal segments without a calculator. I think I need to integrate, but I'm not sure.
dy/dx = 0 when the numerator is equal to zero (but check that the denominator doesn't also equal zero when this happens).
|
2015-05-30 06:44:26
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9019616842269897, "perplexity": 452.7784935771794}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207930895.96/warc/CC-MAIN-20150521113210-00025-ip-10-180-206-219.ec2.internal.warc.gz"}
|
https://www.gradesaver.com/textbooks/math/algebra/algebra-a-combined-approach-4th-edition/chapter-5-test-page-409/28
|
## Algebra: A Combined Approach (4th Edition)
$A=(4x^{2}-9)$ square inches
The top of the table is a rectangle, so its area $(A)$ will be $A= L \times W,$ where $L=(2x+3)$ inches and $W=(2x-3)$ inches. Therefore, $A=(2x+3)(2x-3)$ $A=2x(2x-3)+3(2x-3)$ $A=4x^{2}-6x+6x-9$ $A=4x^{2}-9$ Therefore, the area of the table is $A=(4x^{2}-9)$ square inches.
|
2018-10-18 12:40:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.726515531539917, "perplexity": 242.64612079675817}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583511806.8/warc/CC-MAIN-20181018105742-20181018131242-00364.warc.gz"}
|
https://paradigms.oregonstate.edu/activity/887/
|
## Activity: Central Forces Introduction: Lecture Notes
Central Forces 2022
In this course, we will examine a mathematically tractable and physically useful problem - that of two bodies interacting with each other through a central force, i.e. a force that has two characteristics:
Definition of a Central Force:
1. A central force depends only on the separation distance between the two bodies,
2. A central force points along the line connecting the two bodies.
The most common examples of this type of force are those that have $\frac{1}{r^2}$ behavior, specifically the Newtonian gravitational force between two point (or spherically symmetric) masses and the Coulomb force between two point (or spherically symmetric) electric charges. Clearly both of these examples are idealizations - neither ideal point masses or charges nor perfectly spherically symmetric mass or charge distributions exist in nature, except perhaps for elementary particles such as electrons. However, deviations from ideal behavior are often small and can be neglected to within a reasonable approximation. (Power series to the rescue!) Also, notice the difference in length scale: the archetypal gravitational example is planetary motion - at astronomical length scales; the archetypal Coulomb example is the hydrogen atom - at atomic length scales.
The two solutions to the central force problem - classical behavior exemplified by the gravitational interaction and quantum behavior exemplified by the Coulomb interaction - are quite different from each other. By studying these two cases together in the same course, we will be able to explore the strong similarities and the important differences between classical and quantum physics.
Two of the unifying themes of this topic are the conservation laws:
• Conservation of Energy
• Conservation of Angular Momentum
The classical and quantum systems we will explore both have versions of these conservation laws, but they come up in the mathematical formalisms in different ways. You should have covered energy and angular momentum in your introductory physics course, at least in simple, classical mechanics cases. Now is a great time to review the definitions of energy and angular momentum, how they enter into dynamical equations (Newton's laws and kinetic energy, for example), and the conservation laws.
In the classical mechanics case, we will obtain the equations of motion in three equivalent ways,
• using Newton's second law,
• using Lagrangian mechanics,
• using energy conservation.
so that you will be able to compare and contrast the methods. The third approach is slightly more sophisticated in that it exploits more of the symmetries from the beginning.
We will also consider forces that depend on the distance between the two bodies in ways other than $\frac{1}{r^2}$ and explore the kinds of motion they produce.
• group Box Sliding Down Frictionless Wedge
group Small Group Activity
120 min.
##### Box Sliding Down Frictionless Wedge
Theoretical Mechanics 2021 (2 years)
Students solve for the equations of motion of a box sliding down (frictionlessly) a wedge, which itself slides on a horizontal surface, in order to answer the question "how much time does it take for the box to slide a distance $d$ down the wedge?". This activities highlights finding kinetic energies when the coordinate system is not orthonormal and checking special cases, functional behavior, and dimensions.
• assignment_ind Magnetic Moment \& Stern-Gerlach Experiments
assignment_ind Small White Board Question
30 min.
##### Magnetic Moment & Stern-Gerlach Experiments
Quantum Fundamentals 2022 (2 years)
Students consider the relation (1) between the angular momentum and magnetic moment for a current loop and (2) the force on a magnetic moment in an inhomogeneous magnetic field. Students make a (classical) prediction of the outcome of a Stern-Gerlach experiment.
• group Mass is not Conserved
group Small Group Activity
30 min.
##### Mass is not Conserved
Theoretical Mechanics 2021 (2 years)
Groups are asked to analyze the following standard problem:
Two identical lumps of clay of (rest) mass m collide head on, with each moving at 3/5 the speed of light. What is the mass of the resulting lump of clay?
• face Ideal Gas
face Lecture
120 min.
##### Ideal Gas
Thermal and Statistical Physics 2020
These notes from week 6 of Thermal and Statistical Physics cover the ideal gas from a grand canonical standpoint starting with the solutions to a particle in a three-dimensional box. They include a number of small group activities.
• computer Effective Potentials
computer Mathematica Activity
30 min.
##### Effective Potentials
Central Forces 2022 (2 years) Students use a pre-written Mathematica notebook or a Geogebra applet to explore how the shape of the effective potential function changes as the various parameters (angular momentum, force constant, reduced mass) are varied.
• assignment Centrifuge
assignment Homework
##### Centrifuge
Centrifugal potential Thermal and Statistical Physics 2020 A circular cylinder of radius $R$ rotates about the long axis with angular velocity $\omega$. The cylinder contains an ideal gas of atoms of mass $M$ at temperature $T$. Find an expression for the dependence of the concentration $n(r)$ on the radial distance $r$ from the axis, in terms of $n(0)$ on the axis. Take $\mu$ as for an ideal gas.
• group Survivor Outer Space: A kinesthetic approach to (re)viewing center-of-mass
group Small Group Activity
10 min.
##### Survivor Outer Space: A kinesthetic approach to (re)viewing center-of-mass
Central Forces 2022 (2 years) A group of students, tethered together, are floating freely in outer space. Their task is to devise a method to reach a food cache some distance from their group.
• groups Air Hockey
groups Whole Class Activity
10 min.
##### Air Hockey
Central Forces 2022 (2 years)
Students observe the motion of a puck tethered to the center of the airtable. Then they plot the potential energy for the puck on their small whiteboards. A class discussion follows based on what students have written on their whiteboards.
• face Equipartition theorem
face Lecture
30 min.
##### Equipartition theorem
Contemporary Challenges 2022 (3 years)
This lecture introduces the equipartition theorem.
• face Thermal radiation and Planck distribution
face Lecture
120 min.
##### Thermal radiation and Planck distribution
Thermal and Statistical Physics 2020
These notes from the fourth week of Thermal and Statistical Physics cover blackbody radiation and the Planck distribution. They include a number of small group activities.
Learning Outcomes
|
2022-08-16 14:18:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5861424803733826, "perplexity": 1001.8716777667768}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572304.13/warc/CC-MAIN-20220816120802-20220816150802-00264.warc.gz"}
|
https://www.projecteuclid.org/euclid.jdg/1175266181
|
## Journal of Differential Geometry
### $\rm{SL}_2$-orbits and degenerations of mixed Hodge structure
Gregory Pearlstein
#### Abstract
We extend Schmid’s $\rm{SL}_2$-orbit theorem to a class of variations of mixed Hodge structure which normal functions, logarithmic deformations, degenerations of 1-motives and archimedean heights. In particular, as a consequence of this theorem, we obtain a simple formula for the asymptotic behavior of the archimedean height of a flat family of algebraic cycles which depends only on the weight filtration and local monodromy.
#### Article information
Source
J. Differential Geom., Volume 74, Number 1 (2006), 1-67.
Dates
First available in Project Euclid: 30 March 2007
https://projecteuclid.org/euclid.jdg/1175266181
Digital Object Identifier
doi:10.4310/jdg/1175266181
Mathematical Reviews number (MathSciNet)
MR2260287
Zentralblatt MATH identifier
1107.14010
Subjects
Primary: 32Gxx: Deformations of analytic structures
Secondary: 14Dxx: Families, fibrations
#### Citation
Pearlstein, Gregory. $\rm{SL}_2$-orbits and degenerations of mixed Hodge structure. J. Differential Geom. 74 (2006), no. 1, 1--67. doi:10.4310/jdg/1175266181. https://projecteuclid.org/euclid.jdg/1175266181
|
2019-12-10 22:15:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5059215426445007, "perplexity": 2162.3517141314014}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540529006.88/warc/CC-MAIN-20191210205200-20191210233200-00367.warc.gz"}
|
https://groups.google.com/g/sage-devel/c/EfLYpAxl_jU/m/ShlCS9L4BgAJ
|
# Problem with wedge of unnamed diff forms on non-parall. mfds
63 views
### Michi
Mar 30, 2019, 10:21:11 AM3/30/19
to sage-devel
Hey folks,
I'm pretty new to this, so please be patient.
During my developement of an implementation of characteristic classes of the tangent bdl of manifolds, I've encountered a problem when wedging unnamed differential forms on non-parallelizable manifolds. Let's have an example:
sage: S2 = Manifold(2, name='S2', latex_name=r'S^2', start_index=1)sage: U = S2.open_subset(name='U', latex_name=r'S^2 \setminus \{\text{South pole}\}')sage: V = S2.open_subset(name='V', latex_name=r'S^2 \setminus \{\text{North pole}\}')sage: S2.declare_union(U,V)sage: c_xy.<t,z> = U.chart()sage: c_uv.<u,v> = V.chart()sage: xy_to_uv = c_xy.transition_map(c_uv, (x/(x^2+y^2), y/(x^2+y^2)), intersection_name='W', restrictions1= x^2+y^2!=0, restrictions2= u^2+v^2!=0)sage: uv_to_xy = xy_to_uv.inverse()sage: e_tz = c_tz.frame()sage: e_uv = c_uv.frame(); print(e_uv)sage: omega = S2.diff_form(1, name='omega', latex_name=r'\omega')sage: unnamed = S2.diff_form(1)sage: omega[e_xy,:] = -x^2, y^2; show(omega.disp(e_tz))sage: omega.add_comp_by_continuation(e_uv, V.intersection(U), c_uv)sage: unnamed[e_xy,:] = -x^2, y^2; show(omega.disp(e_tz))sage: unnamed.add_comp_by_continuation(e_uv, V.intersection(U), c_uv)sage: unnamed.wedge(omega)
---------------------------------------------------------------------------
UnboundLocalError Traceback (most recent call last)
----> 1 unnamed.wedge(omega)
/opt/sagemath-8.6/local/lib/python2.7/site-packages/sage/manifolds/differentiable/diff_form.pyc in wedge(self, other)
520 vmodule = dom_resu.vector_field_module(dest_map=dest_map_resu)
521 resu_degree = self._tensor_rank + other._tensor_rank
--> 522 resu = vmodule.alternating_form(resu_degree, name=resu_name,
523 latex_name=resu_latex_name)
524 for dom in self_r._restrictions:
UnboundLocalError: local variable 'resu_name' referenced before assignmentUnfortunately, I'm not a professional in python, but I guess the problem could be solved by declaring resu_name and resu_latex_name in the wedge method of the manifolds/differentiable/tensorfield.py file as "None" in the very beginning. In fact, solving this is crucial for calculations with mixed differential forms and its matrices in order to compute the characteristic classes.What is the next step? Create a ticket?Also, I like to discuss my developement so far. But that might be better for another thread.
### Michi
Mar 30, 2019, 10:39:51 AM3/30/19
to sage-devel
Or, maybe, a better approach would be a direct manipulation via resu._name and resu._latex_name.
However, I'm not familiar with the procedure. Furthermore, is this mailing list right the place to discuss my written code?
### Nils Bruin
Mar 30, 2019, 3:09:21 PM3/30/19
to sage-devel
On Saturday, March 30, 2019 at 7:39:51 AM UTC-7, YoungMath wrote:
Or, maybe, a better approach would be a direct manipulation via resu._name and resu._latex_name.
However, I'm not familiar with the procedure. Furthermore, is this mailing list right the place to discuss my written code?
The code you give doesn't reproduce the error you list (I get "y is not defined").
I suspect the problem is that the "wedge" code fails to deal with differential forms that don't have their names set. A workaround would be to set (dummy) names on "unnamed". A fix would consist of letting "wedge" deal with differentials that don't have names set. That's ticket-worthy. Probably the author of the file can fix this pretty quickly (and bang his head on the desk a couple of times in the process).
The doctest example that comes with the function (which you have adapted into your example by the looks of it) shows that "wedge" does work for (at least some) differentials that do have a name.
### Michael Jung
Mar 30, 2019, 3:23:35 PM3/30/19
Oh yes, the chart is wrongly defined. It should be:
sage: c_xy.<x,y> = U.chart()
obviously. I copied it once, changed the variables, and copied it back changing the variables again. Sorry.
What are appropriate settings for a ticket regarding this issue? I'm a ticket virgin.
--
You received this message because you are subscribed to a topic in the Google Groups "sage-devel" group.
To unsubscribe from this topic, visit https://groups.google.com/d/topic/sage-devel/EfLYpAxl_jU/unsubscribe.
To unsubscribe from this group and all its topics, send an email to sage-devel+...@googlegroups.com.
To post to this group, send email to sage-...@googlegroups.com.
Virenfrei. www.avast.com
### Nils Bruin
Mar 30, 2019, 3:35:19 PM3/30/19
to sage-devel
Type: defect
(you can leave it major, unless you really feel it's minor)
Component: Manifolds, but that doesn't exist, so Geometry I guess
cc likely authors to speed things up (Copyright in the file suggests
Gourgoulhon and/or Scrimshaw )
post a link to the ticket here when you've filed. That makes it easier for the authors to find the ticket.
Make sure that your example exhibits the desired behaviour in a clean sage session (i.e., test it by pasting it in, just as a potential dev would do). If your code is buggy, people probably decide their time is better spent on another ticket: strictly speaking, your ticket is invalid in such a case, because you're not illustrating the behaviour you claim.
### MJ
Mar 30, 2019, 3:49:37 PM3/30/19
to sage-devel
Yes, of course. I'm just too impatient for tasks like this. Here's the ticket:
### Eric Gourgoulhon
Mar 30, 2019, 7:43:16 PM3/30/19
to sage-devel
Hi,
Thanks MJ for reporting the bug and opening the ticket.
It is fixed in the ticket branch now.
Nils: yes I banged my head on the desk a couple of times :-)
Best wishes,
Eric.
### Eric Gourgoulhon
Mar 31, 2019, 5:36:32 AM3/31/19
to sage-devel
PS: if you have SageMath 8.7 installed and want to use the fix introduced in https://trac.sagemath.org/ticket/27576, it is quite easy: simply open a console at the SageMath root directory and type:
|
2021-12-03 05:44:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.43770328164100647, "perplexity": 6524.058763105442}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362589.37/warc/CC-MAIN-20211203030522-20211203060522-00417.warc.gz"}
|
https://scipost.org/SciPostPhysProc.1.012
|
## Tau polarimetry in B meson decays
Rodrigo Alonso, Jorge Martin Camalich, Susanne Westhoff
SciPost Phys. Proc. 1, 012 (2019) · published 19 February 2019
### Proceedings event
The 15th International Workshop on Tau Lepton Physics
### Abstract
This article summarizes recent developments in $B\to D^{(\ast)}\tau\nu$ decays. We explain how to extract the tau lepton's production properties from the kinematics of its decay products. The focus is on hadronic tau decays, which are most sensitive to the tau polarizations. We present new results for effects of new physics in tau polarization observables and quantify the observation prospects at BELLE II.
### Authors / Affiliations: mappings to Contributors and Organizations
See all Organizations.
Funders for the research work leading to this publication
|
2019-09-21 06:44:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5394614338874817, "perplexity": 7847.814375371665}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514574286.12/warc/CC-MAIN-20190921063658-20190921085658-00055.warc.gz"}
|
https://forum.allaboutcircuits.com/threads/wiring-a-5a-usb-charging-port-into-a-12v-circuit.157061/
|
# Wiring a 5A USB charging port into a 12V circuit
#### EnJoneer
Joined Oct 11, 2016
7
Hi,
This is my first time posting here but I think I might need to ask for a lot of help with an upcoming project of mine.
I have a Huawei P20 Pro and it supports 5A supercharging. I want to wire a 5A USB port into a 12V system I am designing for a campervan conversion. The reason being that I don't want to have to spend loads of money on an inverter for mains power if I don't need to.
I want to know whether it is possible to purchase a 5A USB charging port, and if not then how I might get around this and make my own? I will be running the 12V system off a 12V MPPT controller with a solar feed.
Sorry if my question is a little dumb, I hope you guys can give me some help!!
#### mvas
Joined Jun 19, 2017
538
Can you provide a URL link to the exact device that you are discussing?
Do you have a 5 Amp USB Charger, that only operates from 120 Volts,
and now you need a 5 Amp USB Charger that operates from 12 Volts?
Is there a special protocol for the 5 Amp USB Charger for your Huawei P20 Pro device?
Last edited:
#### mvas
Joined Jun 19, 2017
538
You can buy a small 12 Volt DC to 120 Volt AC Pure Sine Wave Inverter for (approx) $30.00 #### iONic Joined Nov 16, 2007 1,650 I can understand why you don't want to spend much money after paying for that smartphone. This will do everything for you! 120VAC and USB ports with 4.2A output for$30 just as mvas stated. Trying to design and build something will almost surely cost you more $$and certainly far more time! The Solution #### mvas Joined Jun 19, 2017 538 I can understand why you don't want to spend much money after paying for that smartphone. This will do everything for you! 120VAC and USB ports with 4.2A output for 30 just as mvas stated. Trying to design and build something will almost surely cost you more$$ and certainly far more time!
The Solution
"5 Amp super-charging" over USB is not a Standard, right?
How exactly did they accomplish that?
I would not what to connect any non-certified ( prototype ) charging circuit to my cell phone.
#### iONic
Joined Nov 16, 2007
1,650
"5 Amp super-charging" over USB is not a Standard, right?
How exactly did they accomplish that?
I would not what to connect any non-certified ( prototype ) charging circuit to my cell phone.
The manual I have for a Huawei P20 Pro doesn't even mention "supercharging." Has the battery technology advanced to safely do this? And how good is it for the battery in the long run?
And No, EdJoneer, the question is not dumb.
Last edited:
#### EnJoneer
Joined Oct 11, 2016
7
Can you provide a URL link to the exact device that you are discussing?
Do you have a 5 Amp USB Charger, that only operates from 120 Volts,
and now you need a 5 Amp USB Charger that operates from 12 Volts?
Is there a special protocol for the 5 Amp USB Charger for your Huawei P20 Pro device?
^^ this is the exact starter kit I will be using. I want to be able to save myself some money and remove the need for an inverter in the system by having a 12V USB charge port for my phone. My phone supports 5A supercharging at 230V with the standard phone charger, I just want to wire a USB charging port into the 12V system that supports the 5A charging capability, but I can't find any products capable of doing so online.
As for the special protocol... I honestly have no idea, I was hoping to find info out from you intelligent people.
Thanks for the quick reply!
#### EnJoneer
Joined Oct 11, 2016
7
You can buy a small 12 Volt DC to 120 Volt AC Pure Sine Wave Inverter for (approx) $30.00 I have considered this, they just looked a bit dodgy. Then again, whatever I end up wiring might not be much better! Thread Starter #### EnJoneer Joined Oct 11, 2016 7 I can understand why you don't want to spend much money after paying for that smartphone. This will do everything for you! 120VAC and USB ports with 4.2A output for$30 just as mvas stated. Trying to design and build something will almost surely cost you more and certainly far more time!
The Solution
I was considering buying one of these but had read that some devices have complications working from anything but a pure sine wave inverter?
Also, would it cause any problems to just splice the 12V cable on this and wire it into a 12V system directly?
Thanks for your help!
#### EnJoneer
Joined Oct 11, 2016
7
"5 Amp super-charging" over USB is not a Standard, right?
How exactly did they accomplish that?
I would not what to connect any non-certified ( prototype ) charging circuit to my cell phone.
This was a concern of mine, but if the phone supports up to 5A charging, then surely it would be fine providing it is done correctly?
#### sparky 1
Joined Nov 3, 2018
32
Beside selecting the DC to DC step down converter there are a few automotive electric routines to become familiar with. Like the T connector splits off where the interior dome light is located at the fuse box, because it is live when the key is off.
#### EnJoneer
Joined Oct 11, 2016
7
The manual I have for a Huawei P20 Pro doesn't even mention "supercharging." Has the battery technology advanced to safely do this? And how good is it for the battery in the long run?
And No, EdJoneer, the question is not dumb.
I noticed this when I started looking for specs online, but it was one of the key selling points when I was looking for a new phone - the phone will charge from completely flat in 30-45 mins.
I'm assuming it is safe as it is the standard charger sold by Huawei for this model, and the phone battery has worked perfectly without any noticeable deterioration for several months now, so it seems as though it is fine in the long run. Its a great phone, I am really pleased with it.
#### iONic
Joined Nov 16, 2007
1,650
I was considering buying one of these but had read that some devices have complications working from anything but a pure sine wave inverter?
Also, would it cause any problems to just splice the 12V cable on this and wire it into a 12V system directly?
Thanks for your help!
Have you seen this?
5V/5A 30W Quick Charger
|
2020-01-27 10:04:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31211885809898376, "perplexity": 2394.2192842498953}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251696046.73/warc/CC-MAIN-20200127081933-20200127111933-00387.warc.gz"}
|
https://swarm-lab.github.io/swaRm/reference/chullPerimeter.html
|
Given a set of cartesian coordinates, this function determines the perimeter of the convex hull (or envelope) of the set.
chullPerimeter(x, y, geo = FALSE)
## Arguments
x A vector of x (or longitude) coordinates. A vector of y (or latitude) coordinates. A logical value indicating whether the locations are defined by geographic coordinates (pairs of longitude/latitude values). If TRUE, the perimeter is returned as meters. If FALSE, it is returned as units of the [x,y] coordinates. Default: FALSE.
## Value
A single numeric value corresponding to the perimeter of the convex hull (in meters if geo is TRUE).
## Examples
# TODO
|
2020-08-09 17:15:31
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4456968307495117, "perplexity": 2028.8266296156805}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738562.5/warc/CC-MAIN-20200809162458-20200809192458-00437.warc.gz"}
|
https://budgetmodel.wharton.upenn.edu/issues/2019/6/13/pwbm-projections-in-line-with-official-government-estimates
|
Top
# PWBM Projections In-Line with Official Government Estimates
In the Congressional Research Service’s report on the economic effects of the 2017 tax bill, Senior Specialist in Economic Policy Jane Gravelle and Specialist in Public Finance Donald Marples analyzed the effects of the Tax Cuts and Jobs Act (TCJA) on output and growth. They referred to the Congressional Budget Office’s (CBO) long-term forecast that projects that the TCJA has a positive effect on GDP, particularly in earlier years.
The CBO projects that, under the TCJA, after 10 years, in 2027, GDP will be 0.6 percent larger than otherwise. This estimate of the effect of the TCJA on GDP falls within the range that PWBM projected of between 0.6 percent and 1.1 percent. PWBM’s analysis of the act shows that over the same time period debt increases between $1.9 trillion and$2.2 trillion, inclusive of economic growth. Similar to the CBO, PWBM finds that while the TCJA increases the growth rate of GDP, that boost fades over time. The additional growth is not enough to pay for the cuts.
|
2019-08-18 13:01:29
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31443098187446594, "perplexity": 5643.657567010529}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027313889.29/warc/CC-MAIN-20190818124516-20190818150516-00131.warc.gz"}
|
https://egtheory.wordpress.com/tag/fitness-ontology/page/2/
|
Measuring games in the Petri dish
For the next couple of months, Jeffrey Peacock is visiting Moffitt. He’s a 4th year medical student at the University of Central Florida with a background in microbiology and genetic engineering of bacteria and yeast. Together with Andriy Marusyk and Jacob Scott, he will move to human cells and run some in vitro experiments with non-small cell lung cancer — you can read more about this on Connecting the Dots. Robert Vander Velde is also in the process of designing some experiments of his own. Both Jeff and Robert are interested in evolutionary game theory, so this is great opportunity for me to put my ideas on operationalization of replicator dynamics into practice.
In this post, I want to outline the basic process for measuring a game from in vitro experiments. Games in the Petri-dish. It won’t be as action packed as Agar.io — that’s an actual MMO cells-in-Petri-dish game; play here — but hopefully it will be more grounded in reality. I will introduce the gain function, show how to measure it, and stress the importance of quantifying the error on this measurement. Since this is part of the theoretical preliminaries for my collaborations, we don’t have our own data to share yet, so I will provide an illustrative cartoon with data from Archetti et al. (2015). Finally, I will show what sort of data would rule-out the theoretician’s favourite matrix games and discuss the ego-centric representation of two-strategy matrix games. The hope is that we can use this work to go from heuristic guesses at what sort of games microbes or cancer cells might play to actually measuring those games.
Abusing numbers and the importance of type checking
What would you say if I told you that I could count to infinity on my hands? Infinity is large, and I have a typical number of fingers. Surely, I must be joking. Well, let me guide you through my process. Since you can’t see me right now, you will have to imagine my hands. When I hold out the thumb on my left hand, that’s one, and when I hold up the thumb and the index finger, that’s two. Actually, we should be more rigorous, since you are imagining my fingers, it actually isn’t one and two, but i and 2i. This is why they call them imaginary numbers.
Let’s continue the process of extending my (imaginary) fingers from the leftmost digits towards the right. When I hold out my whole left hand and the pinky, ring, and middle fingers on my right hand, I have reached 8i.
But this doesn’t look like what I promised. For the final step, we need to remember the geometric interpretation of complex numbers. Multiplying by i is the same thing as rotating counter-clockwise by 90 degrees in the plane. So, let’s rotate our number by 90 degrees and arrive at $\infty$.
I just counted to infinity on my hands.
Of course, I can’t stop at a joke. I need to overanalyze it. There is something for scientists to learn from the error that makes this joke. The disregard for the type of objects and jumping between two different — and usually incompatible — ways of interpreting the same symbol is something that scientists, both modelers and experimentalists, have to worry about it.
If you want an actually funny joke of this type then I recommend the image of a ‘rigorous proof’ above that was tweeted by Moshe Vardi. My writen version was inspired by a variant on this theme mentioned on Reddit by jagr2808.
I will focus this post on the use of types from my experience with stoichiometry in physics. Units in physics allow us to perform sanity checks after long derivations, imagine idealized experiments, and can even suggest refinements of theory. These are all features that evolutionary game theory, and mathematical biology more broadly, could benefit from. And something to keep in mind as clinicians, biologists, and modelers join forces this week during the 5th annual IMO Workshop at the Moffitt Cancer Center.
Operationalizing the local environment for replicator dynamics
Recently, Jake Taylor-King arrived in Tampa and last week we were brainstorming some projects to work on together. In the process, I dug up an old idea I’ve been playing with as my understanding of the Ohtsuki-Nowak transform matured. The basic goal is to work towards an operational account of spatial structure without having to commit ourselves to a specific model of space. I will take replicator dynamics and work backwards from them, making sure that each term we use can be directly measured in a single system or abducted from the other measurements. The hope is that if we start making such measurements then we might see some empirical regularities which will allow us to link experimental and theoretical models more closely without having to make too many arbitrary assumptions. In this post, I will sketch the basic framework and then give an example of how some of the spatial features can be measured from a sample histology.
Operationalizing replicator dynamics and partitioning fitness functions
As you know, dear regular reader, I have a rather uneasy relationship with reductionism, especially when doing mathematical modeling in biology. In mathematical oncology, for example, it seems that there is a hope that through our models we can bring a more rigorous mechanistic understanding of cancer, but at the same time there is the joke that given almost any microscopic mechanism there is an experimental paper in the oncology literature supporting it and another to contradict it. With such a tenuous and shaky web of beliefs justifying (or just hinting towards) our nearly arbitrary microdynamical assumptions, it seems unreasonable to ground our models in reductionist stories. At such a time of ontological crisis, I have an instinct to turn — much like many physicists did during a similar crisis at the start of the 20th century in their discipline — to operationalism. Let us build a convincing mathematical theory of cancer in the petri dish with as few considerations of things we can’t reliably measure and then see where to go from there. To give another analogy to physics in the late 1800s, let us work towards a thermodynamics of cancer and worry about its many possible statistical mechanics later.
This is especially important in applications of evolutionary game theory where assumptions abound. These assumptions aren’t just about modeling details like the treatments of space and stochasticity or approximations to them but about if there is even a game taking place or what would constitute a game-like interaction. However, to work toward an operationalist theory of games, we need experiments that beg for EGT explanations. There is a recent history of these sort of experiments in viruses and microbes (Lenski & Velicer, 2001; Crespi, 2001; Velicer, 2003; West et al., 2007; Ribeck & Lenski, 2014), slime molds (Strassmann & Queller, 2011) and yeast (Gore et al., 2009; Sanchez & Gore, 2013), but the start of these experiments in oncology by Archetti et al. (2015) is current events[1]. In the weeks since that paper, I’ve had a very useful reading group and fruitful discussions with Robert Vander Velde and Julian Xue about the experimental aspects of this work. This Monday, I spent most of the afternoon discussing similar experiments with Robert Noble who is visiting Moffitt from Montpellier this week.
In this post, I want to unlock some of this discussion from the confines of private emails and coffee chats. In particular, I will share my theorist’s cartoon understanding of the experiments in Archetti et al. (2015) and how they can help us build an operationalist approach to EGT but how they are not (yet) sufficient to demonstrate the authors’ central claim that neuroendocrine pancreatic cancer dynamics involve a public good.
Evolutionary game theory without interactions
When I am working on evolutionary game theory, I usually treat the models I build as heuristics to guide intuitions and push the imagination. But working on something as practical as cancer, and being in a department with many physics-trained colleagues puts pressure on me to think of moving more towards insilications or abductions. Now, Philip Gerlee and Philipp Altrock are even pushing me in that direction with their post on TheEGG. So this entry might seem a bit uncharacteristic, I will describe an experiment — at least as a theorist like me imagines them.
Consider the following idealized protocol that is loosely inspired by Archetti et al. (2015) and the E. coli Long-term evolution experiment (Lenski et al., 1991; Wiser et al., 2013; Ribeck & Lenski, 2014). We will (E1) take a new petri dish or plate; (E2) fill it with a fixed mix of nutritional medium like fetal bovine serum; (E3) put a known number N of two different cell types A and B on the medium (on the first plate we will also know the proportion of A and B in the mixture); (E4) let them grow for a fixed amount of time T which will be on the order of a cell cycle (or two); (E5) scrape the cells off the medium; and (E6) return to step (E1) while selecting N cells at random from the ones we got in step (E5) to seed step (E3). Usually, you would use this procedure to see how A-cells and B-cells compete with each other, as Archetti et al. (2015). However, what would it look like if the cells don’t compete with each other? What if they produce no signalling molecules — in fact, if they excrete nothing into the environment, to avoid cross-feeding interactions — and don’t touch each other? What if they just sit there independently eating their very plentiful nutrient broth?[1]
Would you expect to see evolutionary game dynamics between A and B? Obviously, since I am asking, I expect some people to answer ‘no’ and then be surprised when I derive some math to show that the answer can be ‘yes’. So, dear reader, humour me by being surprised.
Memes, compound strategies, and factoring the replicator equation
When you work with evolutionary game theory for a while, you end up accumulating an arsenal of cute tools and tricks. A lot of them are obvious once you’ve seen them, but you usually wouldn’t bother looking for them if you hadn’t know they existed. In particular, you become very good friends with the replicator equation. A trick that I find useful at times — and that has come up recently in my on-going project with Robert Vander Veldge, David Basanta, and Jacob Scott — is nesting replicator dynamics (or the dual notion of factoring the replicator equation). I wanted to share a relatively general version of this trick with you, and provide an interpretation of it that is of interest to people — like me — who care about the interaction of evolution in learning. In particular, we will consider a world of evolving agents where each agent is complex enough to learn through reinforcement and pass its knowledge to its offspring. We will see that in this setting, the dynamics of the basic ideas — or memes — that the agents consider can be studied in a world of selfish memes independent of the agents that host them.
Approximating spatial structure with the Ohtsuki-Nowak transform
Can we describe reality? As a general philosophical question, I could spend all day discussing it and never arrive at a reasonable answer. However, if we restrict to the sort of models used in theoretical biology, especially to the heuristic models that dominate the field, then I think it is relatively reasonable to conclude that no, we cannot describe reality. We have to admit our current limits and rely on thinking of our errors in the dual notions of assumptions or approximations. I usually prefer the former and try to describe models in terms of the assumptions that if met would make them perfect (or at least good) descriptions. This view has seemed clearer and more elegant than vague talk of approximations. It is the language I used to describe the Ohtsuki-Nowak (2006) transform over a year ago. In the months since, however, I’ve started to realize that the assumptions-view is actually incompatible with much of my philosophy of modeling. To contrast my previous exposition (and to help me write up some reviewer responses), I want to go through a justification of the ON-transform as a first-order approximation of spatial structure.
Evolution as a risk-averse investor
I don’t know about you, but most of my money is in my savings account and not in more volatile assets like property, bonds, or stocks. This is a consequence of either laziness to explore my options, or — the more comforting alternative — extreme risk-aversion. Although it would be nice to have a few thousand dollars more to my name, it would be devastating to have a few thousand dollars less. As such if I was given a lottery where I had a 50% chance of loosing $990 or a 50% chance of winning$1000 then I would probably choose not to play, even though there is an expected gain of $10; I am risk averse, the extra variance of the bet versus the certainty of maintaining my current holdings is not worth$10 for me. I most cases, so are most investors, although the degree of expected profit to variance trade-off differs between agents.
Daniel Bernoulli (8 February 1700 – 17 March 1782) was one of the mathematicians in the famous Bernoulli family of Basal, Switzerland, and contemporary and friend of Euler and Goldbach. He is probably most famous for Bernoulli’s principle in hydrodynamics that his hyper-competitive father Johann publishing in a book he pre-dated by ten years to try and claim credit. One of Daniel’s most productive times was working alongside Euler and Goldbach in the golden days (1724-1732) of the St. Petersburg Academy. It was in Russia that he developed his solution to the St. Petersburg paradox by introducing risk-aversion, and made his contribution to probability, finance, and — as we will see — evolution.
Black swans and Orr-Gillespie theory of evolutionary adaptation
The internet loves fat tails, it is why awesome things like wikipedia, reddit, and countless kinds of StackExchanges exist. Finance — on the other hand — hates fat tails, it is why VaR and financial crises exist. A notable exception is Nassim Taleb who became financially independent by hedging against the 1987 financial crisis, and made a multi-million dollar fortune on the recent crisis; to most he is known for his 2007 best-selling book The Black Swan. Taleb’s success has stemmed from his focus on highly unlikely events, or samples drawn from far on the tail of a distribution. When such rare samples have a large effect then we have a Black Swan event. These are obviously important in finance, but Taleb also stresses its importance to the progress of science, and here I will sketch a connection to the progress of evolution.
|
2020-09-19 16:08:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.523240864276886, "perplexity": 919.7950049915999}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400192778.51/warc/CC-MAIN-20200919142021-20200919172021-00593.warc.gz"}
|
https://asmedigitalcollection.asme.org/heattransfer/article-abstract/132/11/112601/455973/Vadasz-Number-Influence-on-Vibration-in-a-Rotating?redirectedFrom=PDF
|
We consider vibration effects on the classical Rayleigh–Be’nard problem and the classical Vadasz (1994, “Stability of Free Convection in a Narrow Porous Layer Subject to Rotation,” Int. Commun. Heat Mass Transfer, 21, pp. 881–890) problem, which includes rotation of a vertical porous layer about the $z$-axis. In particular, we focus on the influence of the Vadasz number on vibration for small to moderate and large Vadasz numbers. For small to moderate Vadasz numbers, we develop an analogy between the Vadasz problem (Vadasz, 1994, “Stability of Free Convection in a Narrow Porous Layer Subject to Rotation,” Int. Commun. Heat Mass Transfer, 21, pp. 881–890) placed far away from the axis of rotation and classical Rayleigh–Be’nard problem, both of which include the effects of vibration. It is shown here that the stability criteria are identical to the Rayleigh–Be’nard problem with vibration when $g∗=ω∗2X0∗$. The analysis for the large Vadasz number scaling indicates that a frozen time approximation is appropriate where the effect of vibration is modeled as small variations in the Rayleigh number definition.
1.
Nield
,
D. A.
, and
Bejan
,
A.
, 1995,
Convection in Porous Media
,
Wiley
,
New York
.
2.
Chandrasekar
,
S.
, 1961,
Hydrodynamic and Hydromagnetic Stability
,
Oxford University Press
,
London, UK
.
3.
,
P.
, 1994, “
Stability of Free Convection in a Narrow Porous Layer Subject to Rotation
,”
Int. Commun. Heat Mass Transfer
0735-1933,
21
, pp.
881
890
.
4.
,
P.
, 1996, “
Convection and Stability in a Rotating Porous Layer With Alternating Direction of the Centrifugal Body Force
,”
Int. J. Heat Mass Transfer
0017-9310,
39
(
8
), pp.
1639
1647
.
5.
,
P.
, and
Govender
,
S.
, 1998, “
Two-Dimensional Convection Induced by Gravity and Centrifugal Forces in a Rotating Porous Layer Far Away From the Axis of Rotation
,”
Int. J. Rotating Mach.
1023-621X,
4
(
2
), pp.
73
90
.
6.
,
P.
, and
Govender
,
S.
, 2001, “
Stability and Stationary Convection Induced by Gravity and Centrifugal Forces in a Rotating Porous Layer Distant From the Axis of Rotation
,”
Int. J. Eng. Sci.
0020-7225,
39
, pp.
715
732
.
7.
Govender
,
S.
, 2003, “
Oscillatory Convection Induced by Gravity and Centrifugal Forces in a Rotating Porous Layer Distant From the Axis of Rotation
,”
Int. J. Eng. Sci.
0020-7225,
41
(
6
), pp.
539
545
.
8.
,
P.
, 1998, “
Coriolis Effect on Gravity-Driven Convection in a Rotating Porous Layer Heated From Below
,”
J. Fluid Mech.
0022-1120,
376
, pp.
351
375
.
9.
Gresho
,
P. M.
, and
Sani
,
R. L.
, 1970, “
The Effects of Gravity Modulation on the Stability of a Heated Fluid Layer
,”
J. Fluid Mech.
0022-1120,
40
, pp.
783
806
.
10.
,
M.
, and
Roux
,
B.
, 1988, “
Natural Convection in a Long Vertical Cylinder Under Gravity Modulation
,”
J. Fluid Mech.
0022-1120,
193
, pp.
391
415
.
11.
Christov
,
C. I.
, and
Homsy
,
G. M.
, 2001, “
Nonlinear Dynamics of Two-Dimensional Convection in a Vertically Stratified Slot With and Without Gravity Modulation
,”
J. Fluid Mech.
0022-1120,
430
, pp.
335
360
.
12.
Hirata
,
K.
,
Sasaki
,
T.
, and
Tanigawa
,
H.
, 2001, “
Vibrational Effect on Convection in a Square Cavity at Zero Gravity
,”
J. Fluid Mech.
0022-1120,
455
, pp.
327
344
.
13.
Govender
,
S.
, 2004, “
Stability of Convection in a Gravity Modulated Porous Layer Heated From Below
,”
Transp. Porous Media
0169-3913,
57
(
1
), pp.
113
123
.
14.
Govender
,
S.
, 2005, “
Destabilising a Fluid Saturated Gravity Modulated Porous Layer Heated From Above
,”
Transp. Porous Media
0169-3913,
59
(
2
), pp.
215
225
.
15.
Govender
,
S.
, 2005, “
Weak Non-Linear Analysis of Convection in a Gravity Modulated Porous Layer
,”
Transp. Porous Media
0169-3913,
60
(
1
), pp.
33
42
.
16.
Bardan
,
G.
, and
Mojtabi
,
A.
, 2000, “
On the Horton–Rogers–Lapwood Convective Instability With Vertical Vibration
,”
Phys. Fluids
0031-9171,
12
, pp.
2723
2731
.
17.
Pedramrazi
,
Y.
,
Maliwan
,
K.
,
Charrier–Mojtabi
,
M. C.
, and
Mojtabi
,
A.
, 2005, “
Influence of Vibration on the Onset of Thermoconvection in Porous Medium
,”
Handbook of Porous Media
,
Marcel Dekker
,
New York
, pp.
321
370
.
18.
Govender
,
S.
, 2005, “
Linear Stability and Convection in a Gravity Modulated Porous Layer Heated From Below: Transition From Synchronous to Subharmonic Solution
,”
Transp. Porous Media
0169-3913,
59
(
2
), pp.
227
238
.
19.
Kuznetsov
,
A. V.
, 2006, “
Linear Stability Analysis of the Effect of Vertical Vibration on Bioconvection in a Horizontal Porous Layer of Finite Depth
,”
J. Porous Media
1091-028X,
9
, pp.
597
608
.
20.
Straughan
,
B.
, 2000, “
A Sharp Nonlinear Stability Threshold in Rotating Porous Convection
,”
Proc. R. Soc. London, Ser. A
0950-1207,
457
, pp.
87
93
.
21.
McLachlan
,
N. W.
, 1964,
Theory and Application of Mathieu Functions
,
Dover
,
New York
.
|
2021-11-28 00:30:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.40167587995529175, "perplexity": 9235.605265250388}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358323.91/warc/CC-MAIN-20211127223710-20211128013710-00131.warc.gz"}
|
https://deepai.org/publication/synthesis-of-shared-control-protocols-with-provable-safety-and-performance-guarantees
|
# Synthesis of Shared Control Protocols with Provable Safety and Performance Guarantees
We formalize synthesis of shared control protocols with correctness guarantees for temporal logic specifications. More specifically, we introduce a modeling formalism in which both a human and an autonomy protocol can issue commands to a robot towards performing a certain task. These commands are blended into a joint input to the robot. The autonomy protocol is synthesized using an abstraction of possible human commands accounting for randomness in decisions caused by factors such as fatigue or incomprehensibility of the problem at hand. The synthesis is designed to ensure that the resulting robot behavior satisfies given safety and performance specifications, e.g., in temporal logic. Our solution is based on nonlinear programming and we address the inherent scalability issue by presenting alternative methods. We assess the feasibility and the scalability of the approach by an experimental evaluation.
## Authors
• 24 publications
• 12 publications
• 61 publications
• ### Synthesis of Provably Correct Autonomy Protocols for Shared Control
We synthesize shared control protocols subject to probabilistic temporal...
05/15/2019 ∙ by Murat Cubuktepe, et al. ∙ 0
• ### Tosca: Operationalizing Commitments Over Information Protocols
The notion of commitment is widely studied as a high-level abstraction f...
08/10/2017 ∙ by Thomas C. King, et al. ∙ 0
• ### Maximum Realizability for Linear Temporal Logic Specifications
Automatic synthesis from linear temporal logic (LTL) specifications is w...
04/02/2018 ∙ by Rayna Dimitrova, et al. ∙ 0
• ### Automatic Trajectory Synthesis for Real-Time Temporal Logic
Many safety-critical systems must achieve high-level task specifications...
09/14/2020 ∙ by Rafael Rodrigues da Silva, et al. ∙ 0
• ### Opportunistic Synthesis in Reactive Games under Information Asymmetry
Reactive synthesis is a class of methods to construct a provably-correct...
06/13/2019 ∙ by Abhishek N. Kulkarni, et al. ∙ 0
• ### Decomposing GR(1) Games with Singleton Liveness Guarantees for Efficient Synthesis
Temporal logic based synthesis approaches are often used to find traject...
09/20/2017 ∙ by Sumanth Dathathri, et al. ∙ 0
• ### Control Synthesis using Signal Temporal Logic Specifications with Integral and Derivative Predicates
In many applications, the integrals and derivatives of signals carry val...
03/26/2021 ∙ by Ali Tevfik Buyukkocak, et al. ∙ 0
##### This week in AI
Get the week's most popular data science and artificial intelligence research sent straight to your inbox every Saturday.
## I Introduction
We study the problem of shared control, where a robot shall accomplish a task according to a human operator’s goals and given specifications addressing safety or performance. Such scenarios are for instance found in remotely operated semi-autonomous wheelchairs [11]. In a nutshell, the human has a certain action in mind and issues a command. Simultaneously, an autonomy protocol provides—based on the available information—another command. These commands are blended—also referred to as arbitrated—and deployed to the robot.
Earlier work discusses shared control from different perspectives [7, 8, 20, 19, 13, 10], however, formal correctness in the sense of ensuring safety or optimizing performance has not been considered. In particular, having the human as an integral factor in this scenario, correctness needs to be treated in an appropriate way as a human might not be able to comprehend factors of a system and—in the extremal case—can drive a system into inevitable failure.
There are several things to discuss. First, a human might not be sure about which command to take, depending on the scenario or factors like fatigue or incomprehensibility of the problem. We account for uncertainties in human decisions by introducing randomness to choices. Moreover, a means of actually interpreting a command is needed in form of a user interface, , a brain-computer interface; the usually imperfect interpretation adds to the randomness. We call a formal interpretation of the human’s commands the human strategy (this concept will be explained later).
As many formal system models are inherently stochastic, our natural formal model for robot actions inside an environment is a Markov decision process (MDP) where deterministic action choices induce probability distributions over system states. Randomness in the choice of actions, like in the human strategy, is directly carried over to these probabilities when resolving nondeterminism. For MDPs, quantitative properties like “the probability to reach a bad state is lower than
” or “the cost of reaching a goal is below a given threshold” can be formally verified. If a set of such specifications is satisfied for the human strategy and the MDP, the task can be carried out safely and with good performance.
Given that the human strategy induces certain critical actions with a high probability, one or more specifications might be refuted. In this case, the autonomy should provide an alternative strategy that—when blended with the human strategy—satisfies the specifications without discarding too much of the human’s choices. As in [8], the blending puts weight on either the human’s or the autonomy protocol’s choices depending on factors such as the confidence of the human or the level of information the autonomy protocol has at its disposal.
The question is now how such a human strategy can be obtained. It seems unrealistic that a human can comprehend an MDP modeling a realistic scenario in the first place; primarily due the possibly very large size of the state space. Moreover, a human might not be good at making sense of probabilities or cost of visiting certain states at all. We employ learning techniques to collect data about typical human behavior. This can, for instance, be performed within a simulation environment. In our case study, we model a typical shared control scenario based on a wheelchair [11] where a human user and an autonomy protocol share the control responsibility. Having a human user solving a task, we compute strategies from the obtained data using
inverse reinforcement learning
[16, 1]. Thereby, we can give guarantees on how good the obtained strategy approximates the actual intends of the user.
The design of the autonomy protocol is the main concern of this paper. We define the underlying problem as a nonlinear optimization problem and propose a technique to address the consequent scalability issues by reducing the problem to a linear optimization problem. After an autonomy protocol is synthesized, guarantees on safety and performance can be given assuming that the user behaves according to the human strategy obtained beforehand. The main contribution is a formal framework for the problem of shared autonomy together with thorough discussions on formal verification, experiments, and current pitfalls. A summary of the approaches and an outline are given in Section II.
Shared control has attracted considerable attention recently. We only overview some recent approaches into context with our results. First, Dragan and Srinivasa discussed strategy blending for shared control in [8, 7]. There, the focus was on the prediction of human goals. Combining these approaches, , by inferring formal safety or performance specifications by prediction of human goals, is an interesting direction for future work. Iturrate et al. presented shared control using feedback based on electroencephalography (a method to record electrical activity of the brain) [13], where a robot is partly controlled via error signals from a brain-computer interface. In [19], Trautman proposes to treat shared control broadly as a random process where different components are modeled by their joint probability distributions. As in our approach, randomness naturally prevents strange effects of blending: Consider actions “up” and “down” to be blended with equally distributed weight without having means to actually evaluating these weights. Finally, in [10] a synthesis method switches authority between a human operator and the autonomy such that satisfaction of linear temporal logic constraints can be ensured.
## Ii Shared control
Consider first Fig. 1 which recalls the general framework for shared autonomy with blending of commands; additionally we have a set of specifications, a formal model for robot behavior, and a blending function. In detail, a robot is to take care of a certain task. For instance, it shall move to a certain landmark. This task is subject to certain performance and safety considerations, , it is not safe to take the shortest route because there are too many obstacles. These considerations are expressed by a set of specifications . The possible behaviors of the robot inside an environment are given by a Markov decision process (MDP) . Having MDPs gives rise to choices of certain actions to perform and to randomness in the environment: A chosen path might induce a high probability to achieve the goal while with a low probability, the robot might slip and therefore fail to complete the task.
Now, in particular, a human user issues a set of commands for the robot to perform. We assume that the commands issued by the human are consistent with an underlying randomized strategy for the MDP . Put differently, at design time we compute an abstract strategy of which the set of human commands is one realization. This modeling way allows to account for a variety of imperfections. Although it is not directly issued by a human, we call this strategy the human strategy. Due to possible human incomprehensibility or lack of detailed information, this leads to the fact that the strategy might not satisfy the requirements.
Now, an autonomy protocol is to be designed such that it provides an alternative strategy , the autonomous strategy. The two strategies are then blended—according to the given blending function into a new strategy which satisfies the specifications. The blending function reflects preference over either the decisions of the human or the autonomy protocol. We also ensure that the blended strategy deviates only minimally from the human strategy. At runtime we can then blend decisions of the human user with decisions based on the autonomous strategy. The resulting “blended” decisions are according to the blended strategy , thereby ensuring satisfaction of the specifications. This procedure, while involving expensive computations at design time, is very efficient at runtime.
Summarized, the problem we are addressing in this paper is then—in addition to the proposed modeling of the scenario—to synthesize the autonomy protocol in a way such that the resulting blended strategy meets all of the specifications while it only deviates from the human strategy as little as possible. We introduce all formal foundations that we need in Section III. The shared control synthesis problem with all needed formalisms is presented in Section IV as being a nonlinear optimization problem. Addressing scalability, we reduce the problem to a linear optimization problem in Section V. We indicate the feasibility and scalability of our techniques using data-based experiments in Section V and draw a short conclusion in Section VII.
## Iii Preliminaries
#### Iii-1 Models
A probability distribution over a finite or countably infinite set is a function with . The set of all distributions on is denoted by .
[MDP] A Markov decision process (MDP) is a tuple with a set of states , a unique initial state , a finite set of actions, and a (partial) probabilistic transition function . MDPs operate by means of nondeterministic choices of actions at each state, whose successors are then determined probabilistically with respect to the associated probability distribution. The enabled actions at state are denoted by . To avoid deadlock states, we assume that for all . A cost function for an MDP adds cost to a transition with . A path in an is a finite (or infinite) sequence with for all . If for all , all actions can be disregarded and the MDP reduces to a
discrete-time Markov chain (MC)
.
The unique probability measure for a set of paths of MC can be defined by the usual cylinder set construction, the expected cost of a set of paths is denoted by , see [2] for details. In order to define a probability measure and expected cost on MDPs, the nondeterministic choices of actions are resolved by so-called strategies. For practical reasons, we restrict ourselves to memoryless strategies, again refer to [2] for details. [Strategy] A randomized strategy for an MDP is a function such that implies . A strategy with for and for all is called deterministic. The set of all strategies over is denoted by . Resolving all nondeterminism for an MDP with a strategy yields an induced Markov chain . Intuitively, the random choices of actions from are transferred to the transition probabilities in . [Induced MC] Let MDP and strategy . The MC induced by and is where
\probmdp\sched(s,s′)=∑\act∈\Act(s)\sched(s)(\act)⋅\probmdp(s,\act)(s′) for all s,s′∈S .
#### Iii-2 Specifications
A quantitative reachability property with upper probability threshold and target set constrains the probability to reach from in to be at most . Expected cost properties impose an upper bound on the expected cost to reach goal states . Intuitively, bad states shall only be reached with probability (safety specification) while the expected cost for reaching goal states has to be below (performance specification). Probability and expected cost to reach from are denoted by and , respectively. Hence, and express that the properties and are satisfied by MC . These concepts are analogous for lower bounds on the probability. We also use until properties of the form expressing that the probability of reaching while not reaching beforehand is at least .
An MDP satisfies both safety specification and performance specification , iff for all strategies it holds that the induced MC satisfies and , i.e., and . If several performance or safety specifications are given MDP , the simultaneous satisfaction for all strategies, denoted by , can be formally verified for an MDP using multi-objective model checking [9].
Here, we are interested in the synthesis problem, where the aim is to find one particular strategy for which the specifications are satisfied. If for and strategy it holds that , then is said to admit the specifications, also denoted by .
Consider Fig. 2(a) depcting MDP with initial state , where states and have choices between actions or and or , respectively. For instance, action induces a probabilistic choice between and with probabilities and . The self loops at and indicate looping back with probability one for each action.
Assume now, a safety specification is given by . The specification is violated for , as the deterministic strategy with and induces a probability of reaching of , see the induced MC in Fig. 2(b). For the randomized strategy with and , which chooses between all actions uniformly, the specification is also violated: The probability of reaching is , hence . However, for the deterministic strategy with and the probability is , thus . Note that minimizes the probability of reaching while maximizes this probability.
## Iv Synthesizing shared control protocols
In this section we describe our formal approach to synthesize a shared control protocol in presence of randomization. We start by formalizing the concepts of blending and strategy perturbation. Afterwards we formulate the general problem and show that the solution to the synthesis problem is correct.
Consider Fig. 3, where a room to navigate in is abstracted into a grid. We will use this as our ongoing example. A wheelchair as in [11] is to be steered from the lower left corner of the grid to the exit on the upper right corner of the grid. There is also an autonomous robotic vacuum cleaner moving around the room; the goal is for the wheelchair to reach the exit without crashing into the vacuum cleaner. We now assume that the vacuum cleaner moves according to probabilities that are fixed according to evidence gathered beforehand; these probabilities are unknown or incomprehensible to the human user. To improve the safety of the wheelchair, it is equipped with an autonomy protocol that is to improve decisions of the human or even overwrite them in case of safety hazards. For the design of the autonomy protocol, the evidence data about the cleaner is present.
Now an obvious strategy to move for the wheelchair, not taking into account the vacuum cleaner, is depicted by the red solid line in Fig. 3(a). As indicated in Fig. 3(b), the strategy proposed by the human is unsafe because there is a high probability to collide with the obstacle. The autonomy protocol computes a safe strategy, indicated by the solid line in Fig. 3(b). As this strategy deviates highly from the human strategy, the dashed line indicates a still safe enough alternative which is a compromise or—in our terminology—a blending between the two strategies. We assume in the following that possible behaviors of the robot inside the environment are modeled by MDP . The human strategy is given as randomized strategy for . We explain how to obtain this strategy in Section VI. Specifications are being either safety properties or performance properties .
### Iv-a Strategy blending
Given two strategies, they are to be blended into a new strategy favoring decisions of one or the other in each state of the MDP. In our setting, the human strategy is blended with the autonomous strategy by means of an arbitrary blending function. In [8] it is argued that blending intuitively reflects the confidence in how good the autonomy protocol is able to assist with respect to the human user’s goals. In addition, factors probably unknown or incomprehensible for the human such as safety or performance optimization also should be reflected by such a function.
Put differently, possible actions of the user should be assigned low confidence by the blending function, if he cannot be trusted to make the right decisions. For instance, recall Example IV. At cells of the grid where with a very high probability the wheelchair might collide with the vacuum cleaner, it makes sense to assign a high confidence in the autonomy protocol’s decisions because not all safety-relevant information is present for the human.
In order to enable formal reasoning together with such a function we instantiate the blending with a state-dependent function which at each state of an MDP weighs the confidence in both the human’s and the autonomy’s decisions. A more fine-grained instantiation might incorporate not only the current state of the MDP but also the strategies of both human and autonomy or history of a current run of the system. Such a formalism is called linear blending and is used in what follows. In [19], additional notions of blending are discussed.
[Linear blending] Given an MDP , two strategies , and a blending function , the blended strategy for all states , and actions is
σha(s)(α)=\blendFunc(s)⋅\schedh(s)(α)+(1−\blendFunc(s))⋅\scheda(s)(α) .
Note that the blended strategy is a well-defined randomized strategy. For each , the value represents the confidence in the human’s decisions at this state, , the “weight” of at .
Coming back to Example IV, the critical cells of the grid correspond to certain states of the MDP ; at these states a very low confidence in the human’s decisions should be assigned. For instance at such a state we might have leading to the fact that all randomized choices of the human strategy are scaled down by this factor. Choices of the autonomous strategy are only scaled down by factor . The addition of these scaled choices then gives a new strategy highly favoring the autonomy’s decisions.
### Iv-B Perturbation of strategies
As mentioned before, we want to ensure that the blended strategy deviates minimally from the human strategy. To now measure such a deviation, we introduce the concept of perturbation which was—on a complexity theoretic level—for instance investigated in [5]. Here, we introduce an additive perturbation for a (randomized) strategy, incrementing or decrementing probabilities of action choices such that a well-defined distribution over actions is maintained. [Strategy perturbation] Given MDP and strategy , an (additive) perturbation is a function with
∑\act∈\Actδ(s,\act)=0 for all s∈S .
The value is called the perturbation value at state for action . Overloading the notation, the perturbed strategy is given by
δ(\sched)(s,\act)=\sched(s)(\act)+δ(s,\act) for % all s∈S and \act∈\Act .
### Iv-C Design of the autonomy protocol
For the formal problem, we are given blending function , specifications , MDP , and human strategy . We assume that does not satisfy all of the specifications, , . The autonomy protocol provides the autonomous strategy . According to , the strategies and are blended into strategy , see Definition IV-A, , . The shared control synthesis problem is to design the autonomy protocol such that for the blended strategy it holds , while minimally deviating from . The deviation from is captured by finding a perturbation as in Definition IV-B, where, , the infinity norm of all perturbation values is minimal.
Our problem involves the explicit computation of a randomized strategy and the induced probabilities, which is inherently nonlinear because the corresponding variables need to be multiplied. Therefore, the canonical formulation is given by a nonlinear optimization program (NLP). We first assume that the only specification is a quantitative reachability property , then we describe how more properties can be included. The program has to encompass defining the autonomous strategy , the perturbation of the human strategy, the blended strategy , and the probability of reaching the set of target states .
We introduce the following specific set of variables:
• for each and define the autonomous strategy and the blended strategy .
• for each and are the perturbation variables for and .
• for each are assigned the probability of reaching from state under strategy .
Using these variables, the NLP reads as follows:
minimize max{|δs\act|∣s∈S,\act∈\Act} (1) subject to p\sinit≤λ (2) ∀s∈T. ps=1 (3) ∀s∈S. ∑\act∈\Act\scheds,\acta=∑\act∈\Act\scheds,\actha=1 (4) ∀s∈S.∀\act∈\Act. \scheds,\actha=\schedh(s)(\act)+δs,\act (5) ∀s∈S. ∑\act∈\Actδs,\act=0 (6) ∀s∈S.∀\act∈\Act. \scheds,\actha=\blendFunc(s)⋅\schedh(s)(\act)+(1−\blendFunc(s))⋅\scheds,\acta (7) ∀s∈S. ps=∑\act∈\Actσs,\actha⋅∑s′∈S\probmdp(s,\act)(s′)⋅ps′ (8)
The NLP works as follows. First, the infinity norm of all perturbation variables is minimized (by minimizing the maximum of all perturbation variables) (1). The probability assigned to the initial state has to be smaller than or equal to to satisfy (2). For all target states , the probability of the corresponding probability variables is assigned one (3). Now, to have well-defined strategies and , we ensure that the assigned values of the corresponding strategy variables at each state sum up to one (4). The perturbation of the human strategy resulting in the strategy as in Definition IV-B is computed using the perturbation variables (5); in order for the perturbation to be well-defined, the variables have to sum up to zero at each state (6). The blending of and with respect to as in Definition IV-A is defined in (7). Finally, the probability to reach from each is computed in (8), defining a non-linear equation system, where action probabilities, given by the induced strategy , are multiplied by probability variables for all possible successors.
Note that this nonlinear program is in fact bilinear due to multiplying the strategy variables with the probability variables (8). The number of constraints is governed by the number of state and action pairs, , the size of the problem is in .
An assignment of real-valued variables is a function ; it is satisfying for a set of (in)equations, if each one evaluates to . A satisfying assignment is minimizing with respect to objective if for there is no other assignment with . Using these notions, we state the correctness of the NLP in (1) – (8).
[Soundness and completeness] The NLP is sound in the sense that each minimizing assignment induces a solution to the shared control synthesis problem. It is complete in the sense that for each solution to the shared control synthesis there is a minimizing assignment of the NLP. Soundness tells that each satisfying assignment of the variables corresponds to strategies and as well as the perturbation as defined above. Moreover, any optimal solution induces a perturbation minimally deviating from the human strategy . Completeness means that all possible solutions of the shared control synthesis problem can be encoded by this NLP. Unsatisfiability means that no such solution exists; the problem is infeasible.
We now explain how the NLP can be extended for further specifications. Assume in addition to , another reachability property with is given. We add another set of probability variables for each state ; (2) is copied for and , (3) is defined for all states and (8) is copied for all , thereby computing the probability of reaching under for all states.
To handle an expected cost property for , we use variables being assigned the expected cost for reaching for all . We add the following equations:
r\sinit≤κ (9) ∀s∈G. rs=0 (10) ∀s∈S. rs=∑\act∈\Act(σs,\actha⋅\rew(s,\act)+∑s′∈S\probmdp(s,\act)(s′)⋅rs′) (11)
First, the expected cost of reaching is smaller than or equal to at (9). Goal state are assigned cost zero (10), otherwise infinite cost is collected at absorbing states. Finally, the expected cost for all other states is computed by (11) where according to the blended strategy the cost of each action is added to the expected cost of the successors. An important insight is that if all specifications are expected reward properties, the program is no longer nonlinear
but a linear program (LP), as there is no multiplication of variables.
### Iv-E Generalized blending
If the problem is not feasible for the given blending function, optionally the autonomy protocol can try to compute a new function for which the altered problem is feasible. We call this procedure generalized blending. The idea is that computing this function gives the designer of the protocol insight on where more confidence needs to be placed into the autonomy or, vice versa, where the human cannot be trusted to satisfy the given specifications.
Computing this new function is achieved by nearly the same NLP as for a fixed blending function while adding variables for each state , defining the new blending function by . We substitute Equation 7 by
∀s∈S.∀\act∈\Act. \scheds,\actha=\blendFuncs⋅\schedh(s)(\act)+(1−\blendFuncs)⋅\scheds,\acta . (12)
A satisfying assignment for the resulting nonlinear program induces a suitable blending function in addition to the strategies. If this problem is also infeasible, there is no strategy that satisfies the given specifications for MDP . If there is no solution for the NLP given by Equations 1 – 12, there is no strategy such that . As there are no restrictions on the blending function, this corollary trivially holds: Consider for instance with for each . This function disregards the human strategy which may be perturbed to each other strategy . Reconsider the MDP from Example III-2 with specification and the randomized strategy
which takes each action uniformly distributed. As we saw,
. We choose this strategy as the human strategy and as the robot MDP. For a blending function putting high confidence in the human, , if for all , the problem is infeasible.
In Table I we display results putting medium (), low (), or no confidence () in the human at and . We list the assignments for the resulting strategies and as well as the probability to reach under the blended strategy . The results were obtained using the NLP solver IPOPT [4].
We observe that for decreasing confidence in the human decisions, the autonomous strategy has higher probabilities for actions and which are the “bad” actions here. That means that—if there is a higher confidence in the autonomy—solutions farer away from the optimum are good enough. The maximal deviation from the human strategy is . Generalized blending with maximizing over the confidence in the human’s decisions at all states yields , , we compute the highest possible confidence in the human’s decisions where the problem is still feasible under the given human strategy.
## V Computationally Tractable Approach
The nonlinear programming approach presented in the previous section gives a rigorous method to solve the shared control synthesis problem and serves as mathematically concise definition of the problem. However, NLPs are known to have severe restrictions in terms of scalability and suffer from numerical instabilities. The crucial point to an efficient solution is circumventing the expensive computation of optimal randomized strategies and reducing the number of variables. We propose a heuristic solution which enables to use linear programming (LP) while ensuring soundness.
We utilize a technique referred to as model repair. Intuitively, an erroneous model is changed such that it satisfies certain specifications. In particular, given a Markov chain and a specification that is violated by , a repair of is an automated method that transforms it to new MC such that is satisfied for . Transforming refers to changing probabilities or cost while regarding certain side constraints such as keeping the original graph structure.
In [3], the first approach to automatically repair an MC model was presented as an NLP. Simulation-based algorithms were investigated in [6]. A heuristic but very scalable technique called local repair was proposed in [17]. This approach greedily changes the probabilities or cost of the original MC until a property is satisfied. An upper bound on changes of probabilities or cost can be specified; correctness and completeness can be given in the sense that if a repair with respect to exists, it will be obtained.
Take now the MC which is induced by the robot MDP and the human strategy . We perform model repair such that the repaired MC satisfies the specifications . The question is now, how from the repaired MC , the strategy can be extracted. More precisely, we need inducing exactly , , , when applied to MDP .
First, we need to make sure that the repaired MC is consistent with the original MDP such that a strategy with actually exists. Therefore, we define the maximal and minimal possible transition probabilities and that can occur in any induced MC of MDP :
Pmax(s,s′)=max{\probmdpr(s,\act)(s′)∣\act∈\Act} (13)
for all ; is defined analogously. Now, the repair is performed such that in the resulting MC for all it holds that
Pmin(s,s′)≤P(s,s′)≤Pmax(s,s′) . (14)
While obtaining , model checking needs to be performed intermediately to check if the specifications are satisfied; once they are, the algorithm terminates. In fact, for each state , the probability of satisfaction is computed. We assign variables for all with exactly this probability:
mcs=\pr(s⊨φ1,…,φn) . (15)
Now recall the NLP from the previous section, in particular Equation 8 which is the only nonlinear equation of the program. We replace each variable by the concrete model checking result for each :
mcs=∑\act∈\Actσs,\actha⋅∑s′∈S\probmdp(s,\act)(s′)⋅mcs′ . (16)
As (16) is affine in the variables , the program resulting from replacing (8) by (16) is a linear program (LP). Moreover, (2) and (3) can be removed, reducing the number of constraints and variables. The LP gives a feasible solution to the shared control synthesis problem. [Correctness] The LP is sound in the sense that each minimizing assignment induces a solution to the shared control problem. The correctness is given by construction, as the specifications are satisfied for the blended strategy which is derived from the repaired MC. However, the minimal deviation from the human strategy as in Equation 1 is dependent on the previous computation of probabilities for the blended strategy. Therefore, we actually compute an upper bound on the optimal solution. Let be the minimal deviation possible for any given problem and be the minimal deviation obtained by the LP resulting from replacing (8) by (16). Let and denote the infinity norms of both perturbations. For the perturbations and of it holds that . As we mentioned before, the local repair method can employ a bound on the maximal change of probabilities or cost in the model. If a repair exists for a given , the resulting deviation is then bounded by this .
## Vi Case study and experiments
Defining a formal synthesis approach to the shared control scenario requires a precomputed estimation of a human user’s intentions. As explained in the previous chapter, we account for inherent uncertainties by using a
randomized strategy over possible actions to take. We discuss how such strategies may be obtained and report on benchmark results.
### Vi-a Experimental setting
Our setting is the wheelchair scenario from Example IV inside an interactive Python environment. The size of the grid is variable and an arbitrary number of stationary and randomly moving obstacles (the vacuum cleaner) can be defined. An agent (the wheelchair) is moved according to predefined (randomized) strategies or interactively by a human user.
From this scenario, an MDP with states corresponding to the position of the agent and the obstacles is generated. Actions induce position changes of the agent. The safety specification ensures that the agent reaches a target cell without crashing into an obstacle with a certain high probability , formally . We use the probabilistic model checker PRISM [15] for verification, in form of either a worst–case analysis for each possible strategy or concretely for a specific strategy. The whole toolchain integrates the simulation environment with the approaches described in the previous sections. We use the NLP solver IPOPT [4] and the LP solver Gurobi [12]. To perform model repair for strategies, see Section V, we implemented the greedy method from [17] into our framework augmented by side constraints ensuring well-defined strategies.
### Vi-B Data collection
We ask five participants to perform tests in the environment with the goal to move the agent to a target cell while never being in same cell as the moving obstacle. From the data obtained from each participant, an individual randomized human strategy for this participant can be obtained via Maximum Entropy Inverse Reinforcement Learning (MEIRL) [22]. Inverse reinforcement learning has—for instance—also been used in [14] to collect data about human behavior in a shared control scenario (though without any formal guarantees) or in [18] to distinguish human intents with respect to different tasks. In our setting, each sample is one particular command of the participant, while we have to assume that command is actually made with the intent to satisfy the specification of safely reaching a target cell. For the resulting strategy, the probability of a possible deviation from the actual intend can be bounded with respect to the number of samples using Hoeffding’s inequality, see [21] for details. On the other hand, we can determine the number of samples needed to get a reasonable approximation of typical behavior.
The concrete probabilities of possible deviation depend on , where is the number of samples and is the desired upper bound on the deviation between the true probability of satisfying the specification and the average obtained by the sampled data. Here, in order to ensure an upper bound with probability , the required amount of samples is .
### Vi-C Experiments
The work flow of the experiments is depicted in Figure 4. First off, we discuss sample data for one particular participant using a grid with one moving obstacle inducing an MDP of states. In the synthesis, we employ the model repair procedure as explained in Section V because the approach based on NLP is only feasible for very small examples. We design the blending function as follows: At states where the human strategy induces a high probability of crashing, we put low confidence in the human and vice versa. Using this function, the autonomous strategy is created and passed (together with the function) back to the environment. Note that the blended strategy is ensured to satisfy the specification, see Lemma V. Now, we let the same participant as before do test runs, but this time we blend the human commands with the (randomized) commands of the autonomous strategy . Then the actual action of the agent is determined stochastically. We obtain the following results. Our safety specification is instantiated with , ensuring that the target is safely reached with at least probability . The human strategy has probability , violating the specification. With the aforementioned blending function we compute which induces probability . Blending these two strategies into yields a probability of . When testing the synthesized autonomy protocol for the individual participant, we observe that his choices are mostly corrected if intentionally bad decisions are made. Also, simulating the blended strategy leeds to the expected result that the agent does not crash in roughly of the cases.
To make the behavior of the strategies more accessible, consider Figure 5. For each , , and we indicate for each cell of the grid the worst-case probability to safely reach the target. This probability depends on the current position of the obstacle, which is again probabilistic. The darker the color, the higher the probability; thereby black indicates a probability of to reach the target. We observe that the human’s decisions are rather risky even near the target, while for the blended strategy—once the agent is near the target—there is a very high probability of reaching it safely. This representation also shows that with our approach the blended strategy improves the human strategy while not changing it too much. Specifically, the maximal deviation from the human strategy is , which is the result of the infinity norm as in Equation 1.
To finally assess the scalability of our approach, consider Table II. We generated MDPs for several grid sizes, number of obstacles, and human strategies. We list the number of reachable MDP states (states) and the number of transitions (trans.). We report on the time the synthesis process took (synth.), which is basically the time of solving the LP, and the total time including model checking times using PRISM (total) measured in seconds. To give an indication on the quality of the synthesis, we list the deviation from the human strategy (). A memory out is indicated by “–MO–”. All experiments were conducted on a 2.3GHz machine with 8GB of RAM. Note that MDPs resulting from grid structures are very strongly connected, resulting in a large number of transitions. Thus, the encoding in the PRISM-language [15] is very large, rendering it a very hard problem. We observe that while the procedure is very efficient for models having a few thousand states and hundreds of thousands of transitions, its scalability is ultimately limited due to memory issues. In the future, we will utilize efficient symbolic data structures internal to PRISM. Moreover, we observe that for larger benchmarks the computation time is governed by the solving time of .
## Vii Conclusion
We introduced a formal approach to synthesize autonomy protocols in a shared control setting with guarantees on quantitative safety and performance specifications. The practical usability of our approach was shown by means of data-based experiments. Future work will concern experiments in robotic scenarios and further improvement of the scalability.
## References
• [1] Pieter Abbeel and Andrew Y Ng. Apprenticeship learning via inverse reinforcement learning. In
Proceedings of the twenty-first international conference on Machine learning
, page 1. ACM, 2004.
• [2] Christel Baier and Joost-Pieter Katoen. Principles of Model Checking. The MIT Press, 2008.
• [3] Ezio Bartocci, Radu Grosu, Panagiotis Katsaros, CR Ramakrishnan, and Scott A Smolka. Model repair for probabilistic systems. In TACAS, volume 6605 of LNCS, pages 326–340. Springer, 2011.
• [4] Lorenz T. Biegler and Victor M. Zavala. Large-scale nonlinear programming using IPOPT: An integrating framework for enterprise-wide dynamic optimization. Computers & Chemical Engineering, 33(3):575–582, 2009.
• [5] Taolue Chen, Yuan Feng, David S. Rosenblum, and Guoxin Su. Perturbation analysis in verification of discrete-time Markov chains. In CONCUR, volume 8704 of LNCS, pages 218–233. Springer, 2014.
• [6] Taolue Chen, Ernst Moritz Hahn, Tingting Han, Marta Kwiatkowska, Hongyang Qu, and Lijun Zhang. Model repair for Markov decision processes. In TASE, pages 85–92. IEEE CS, 2013.
• [7] Anca D. Dragan and Siddhartha S. Srinivasa. Formalizing assistive teleoperation. In Robotics: Science and Systems, 2012.
• [8] Anca D. Dragan and Siddhartha S. Srinivasa. A policy-blending formalism for shared control. I. J. Robotic Res., 32(7):790–805, 2013.
• [9] Kousha Etessami, Marta Z. Kwiatkowska, Moshe Y. Vardi, and Mihalis Yannakakis. Multi-objective model checking of Markov decision processes. Logical Methods in Computer Science, 4(4), 2008.
• [10] Jie Fu and Ufuk Topcu. Synthesis of shared autonomy policies with temporal logic specifications. IEEE Trans. Automation Science and Engineering, 13(1):7–17, 2016.
• [11] F. Galán, M. Nuttin, E. Lew, P. W. Ferrez, G. Vanacker, J. Philips, and J. del R. Millán. A brain-actuated wheelchair: Asynchronous and non-invasive brain-computer interfaces for continuous control of robots. Clinical Neurophysiology, 119(9):2159–2169, 2016/05/28.
• [12] Gurobi Optimization, Inc. Gurobi optimizer reference manual. http://www.gurobi.com, 2013.
• [13] Iñaki Iturrate, Jason Omedes, and Luis Montesano. Shared control of a robot using eeg-based feedback signals. In Proceedings of the 2Nd Workshop on Machine Learning for Interactive Systems: Bridging the Gap Between Perception, Action and Communication, MLIS ’13, pages 45–50, New York, NY, USA, 2013. ACM.
• [14] Shervin Javdani, J Andrew Bagnell, and Siddhartha Srinivasa. Shared autonomy via hindsight optimization. In Proceedings of Robotics: Science and Systems, 2015.
• [15] Marta Kwiatkowska, Gethin Norman, and David Parker. Prism 4.0: Verification of probabilistic real-time systems. In CAV, volume 6806 of LNCS, pages 585–591. Springer, 2011.
• [16] Andrew Y Ng, Stuart J Russell, et al. Algorithms for inverse reinforcement learning. In Icml, pages 663–670, 2000.
• [17] Shashank Pathak, Erika Ábrahám, Nils Jansen, Armando Tacchella, and Joost-Pieter Katoen. A greedy approach for the efficient repair of stochastic models. In NFM, volume 9058 of Lecture Notes in Computer Science, pages 295–309. Springer, 2015.
• [18] Constantin A Rothkopf and Dana H Ballard. Modular inverse reinforcement learning for visuomotor behavior. Biological cybernetics, 107(4):477–490, 2013.
• [19] Pete Trautman. Assistive planning in complex, dynamic environments: a probabilistic approach. CoRR, abs/1506.06784, 2015.
• [20] Pete Trautman. A unified approach to 3 basic challenges in shared autonomy. CoRR, abs/1508.01545, 2015.
• [21] Brian D Ziebart. Modeling purposeful adaptive behavior with the principle of maximum causal entropy. 2010.
• [22] Brian D Ziebart, Andrew L Maas, J Andrew Bagnell, and Anind K Dey. Maximum entropy inverse reinforcement learning. 2008.
|
2021-04-23 07:44:01
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8419857025146484, "perplexity": 999.3983498539684}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039568689.89/warc/CC-MAIN-20210423070953-20210423100953-00177.warc.gz"}
|
https://socratic.org/questions/how-do-you-multiply-sqrt81-sqrt36#638218
|
# How do you multiply sqrt81*sqrt36?
Jul 3, 2018
$\pm 54$
#### Explanation:
Given: $\sqrt{81} \cdot \sqrt{36}$
Use the fact that $\sqrt{a} \cdot \sqrt{b} = \sqrt{a b} \setminus \forall a , b \ge 0$.
$= \sqrt{81 \cdot 36}$
$= \sqrt{2916}$
$= \sqrt{{54}^{2}}$
Now, use the fact that $\sqrt{{a}^{2}} = \pm a$.
$= \pm 54$
Jul 3, 2018
$\sqrt{81} \cdot \sqrt{36} = \pm 54$
#### Explanation:
It can be done either of 2 ways: take square root first, then multiply
or multiply first, then take square root.
$\sqrt{81} \cdot \sqrt{36} = \pm 9 \cdot \pm 6 = \pm 54$
$\sqrt{81} \cdot \sqrt{36} = \sqrt{81 \cdot 36} = \sqrt{2916} = \pm 54$
I hope this helps,
Steve
|
2022-08-13 03:52:22
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 11, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9969556927680969, "perplexity": 5859.535461515028}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571869.23/warc/CC-MAIN-20220813021048-20220813051048-00691.warc.gz"}
|
http://www.gradesaver.com/textbooks/math/other-math/basic-college-mathematics-9th-edition/chapter-9-basic-algebra-9-6-solving-equations-9-6-exercises-page-670/5
|
## Basic College Mathematics (9th Edition)
Published by Pearson
# Chapter 9 - Basic Algebra - 9.6 Solving Equations - 9.6 Exercises: 5
No.
#### Work Step by Step
We can plug in $z=-8$ to the equation to see if it is a solution. $2z-1=2(-8)-1=2\times-8-1=-16-1=-17\ne-15$ Therefore, $z=-8$ is not a solution to the given equation.
After you claim an answer you’ll have 24 hours to send in a draft. An editor will review the submission and either publish your submission or provide feedback.
|
2018-04-24 03:30:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7100469470024109, "perplexity": 909.1957583395958}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125946453.89/warc/CC-MAIN-20180424022317-20180424042317-00214.warc.gz"}
|
http://math.stackexchange.com/questions/511855/simple-clarification-operatornamehom-mathsfsetx-y
|
# Simple clarification - $\operatorname{Hom}_{\mathsf{Set}}(X,Y)$
I'm currently working through David Spivak's Category Theory for Scientists, and I'd just like to verify that I am understanding $\def\homset{\operatorname{Hom}_{\mathsf{Set}}}\homset(X,Y)$ correctly. My (informal) understanding that is that it denotes the set of all the different functions from $X \rightarrow Y$. Thus, if we let $A = \{1,2,3,4,5\}$ and $B = \{x,y\}$, we have the following answers to these questions:
a) How many elements does $\homset(A,B)$ have?
• 32 since each element in $A$ can map to one of two elements in $B$. Thus, we have $2^5 = 32$.
b) Find a set A such that for all sets $X$ there is exactly one element in $\homset(X, A)$.
• If there is exactly on element in the hom-set, this means we can only have one function from $X$ to $Y$. Thus, $A$ can be any set containing only one element.
c) Find a set $B$ such that for all sets $X$ there is exactly one element in $\homset(B, X)$.
• This is the one that I'm stuck on and made me think that perhaps I'm misunderstanding the definition given, because by what I'm given, such a set can't exist.
Could someone please confirm my thinking or clarify what I might be misunderstanding? Thanks!
-
Don't forget your friend the empty set! – Trevor Wilson Oct 2 '13 at 4:54
I was actually thinking about that. Wasn't sure that a function could map the empty set to something, though I suppose on hind thought that the definition definitely would allow it. Thanks! – pomegranate Oct 2 '13 at 5:08
@promegranate The function (you wasn't sure about) from $\emptyset$ to any $X$ is just the injection $\emptyset \subseteq X$. – Pece Oct 2 '13 at 10:10
Maps from B to X can be represented in category theory as $X^{B}$.
|
2014-08-22 00:27:26
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.908279299736023, "perplexity": 251.7374761536057}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500822053.47/warc/CC-MAIN-20140820021342-00331-ip-10-180-136-8.ec2.internal.warc.gz"}
|
http://openstudy.com/updates/4f15a287e4b0737c567bbacb
|
## anonymous 5 years ago Can someone explain me the steps of solving integrals using partial fractions?
1. anonymous
Should i give you am example?
2. anonymous
the integration is usually the easy part. finding the decomposition is a pain often
3. anonymous
ya i don't get the steps. my book didnt tell me
4. anonymous
|dw:1326817894193:dw|
5. anonymous
$\frac{1}{(x-2)(x-3)}=\frac{a}{x-2}+\frac{b}{x-3}$ and you need a and b. we know that $a(x-3)+b(x-2)=1$ so this is an easy one. since this equality has to be true for all values of x, it must be true if x = 2, so you get $a(2-3)+b(2-2)=1$ $-a=1$ $a=-1$
6. anonymous
lol they got a diff answer is that ok?
7. anonymous
How about using heavy side cover up for this one?
8. anonymous
repeat the process making x = 3, and you will find $b=1$ so we have $\frac{1}{(x-2)(x-3)}=\frac{-1}{x-2}+\frac{1}{x-3}$ and now integrate term by term. now it is not ok
9. anonymous
*no
10. anonymous
They got 1/3 and -1/3
11. anonymous
then maybe i made a mistake, let me write it out
12. TuringTest
is that a 5 on the second set of parentheses?
13. TuringTest
sat used a 3
14. anonymous
doh, i used 3!!
15. anonymous
LOL ya sorry abt that
16. anonymous
ohhh didnt even notce that sat
17. TuringTest
sat has done it correctly for sure, just with the wrong number
18. anonymous
no that is my fault, but idea is clear right?
19. anonymous
$a(x-5)+b(x-2)=1$ let $x=2$ get $-3a=1,x=-\frac{1}{3}$
20. anonymous
ummmm so y did u chose the value 2?
21. anonymous
i am going to wager you can figure out why i picked 2
22. anonymous
and what i will pick next also
23. anonymous
need a hint?
24. anonymous
well cuz it is in the denomanator? lol
25. anonymous
i have this equality $a(x-5)+b(x-2)=1$ and i am looking for a and b. so how can i find a easily?
26. anonymous
by eliminating one variable by making one part =0
27. anonymous
K i got that LOL
28. anonymous
zactly. so first we let x =2, and then we let x = 5
29. anonymous
oh ok got that :D Thanks :D
30. anonymous
yw
31. anonymous
I thought u guys started teaching today
32. anonymous
This one a is a simple one if I have a more difficult one I will be back :D
33. anonymous
with some practice you can do this: write $\frac{1}{(x-2)(x-5)}=\frac{a}{x-2}+\frac{b}{x-5}$ now to find a, put your finger over the x - 2 in the first expression, put x = 2 and get $a=\frac{1}{2-3}=-\frac{1}{3}$
34. myininaya
its called office hours
35. anonymous
Thanks sat :D
36. anonymous
i really need to get back to work, but i have 30 emails to send out and got bored bored bored. later
37. anonymous
LOL bye
|
2017-01-23 05:10:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8652967810630798, "perplexity": 2055.5753981786584}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282110.46/warc/CC-MAIN-20170116095122-00560-ip-10-171-10-70.ec2.internal.warc.gz"}
|
https://followtutorials.com/2011/09/numerical-method-solution-of-ordinary-differential-equation-using-rk4-method-in-c.html
|
# Numerical method : Solution of ordinary differential equation using RK4 method in C
Algorithm:
1. Start
2. Declare and Initialize necessary variable likes K1, K2, K3, K4 etc.
3. Declare and define the function that returns the functional value.
4. Get the initial value of x, initial value of y, no. of iteration and interval from user.
5. for i = 0 to i < n go to step 6.
6. Do the following calculation: and go to step 7
7. The new values of x and y are: x = x + interval, y = y + K;
8. print the value of x and y;
9. stop
Flowchart:
Source Code:
#include <stdio.h>
/******************************************************
Program: solution of ordinary differential equation
Language : C
Author: Bibek Subedi
Tribhuvan University, Nepal
********************************************************/
float func(float x, float y){
return (y*y-x*x)/(y*y+x*x);
}
int main(){
float K, K1, K2, K3, K4;
float x0 , y0, x, y;
int j, n; float i, H;
printf("Enter initial value of x: ");
scanf("%f", &x0);
printf("Enter initial value of y: ");
scanf("%f", &y0);
printf("Enter no iteration: ");
scanf("%d", &n);
printf("Enter the interval: ");
scanf("%f", &H);
x = x0;
y = y0;
for(i = x+H, j = 0; j < n; i += H, j++){
K1 = H * func(x , y);
K2 = H * func(x+H/2, y+K1/2);
K3 = H * func(x+H/2, y+K2/2);
K4 = H * func(x+H, y+K3);
K = (K1 + 2*K2 + 2*K3 + K4)/6;
x = i;
y = y + K;
printf("At x = %.2f, y = %.4f ", x, y);
printf("\n");
}
return 0;
}
Output
SHARE Numerical method : Solution of ordinary differential equation using RK4 method in C
|
2023-03-30 11:57:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.29192259907722473, "perplexity": 4926.665408870767}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949181.44/warc/CC-MAIN-20230330101355-20230330131355-00619.warc.gz"}
|
https://fridgephysics.com/tag/ocr-combined-science-a-paper-6/
|
# ocr-combined-science-a-paper-6
## Wave Speed
Wave speed is given in meters per second (the number of waves that pass per second). Wavelength is measured in meters and frequency is measured in hertz (Hz), or number of waves per second.
## Demo
In this tutorial you will learn how to calculate the speed of a wave.
The equation for this calculation is written like this:
$v = { \text f \; \text x \; \lambda }$
## Chilled practice question
Calculate the velocity of a wave with a wavelength of 6 m and a frequency of 50 Hz
## Frozen practice question
Find the velocity of a wave which has a time period of 10 s and a wavelength of 24 m, you will need to calculate the frequency from the wave period equation first.
## Science in context
Wave speed is given in meters per second (the number of waves that pass per second). Wave Speed = FrequencyWave length. Wavelength is measured in meters and frequency is measured in hertz (Hz), or number of waves per second.
## Millie’s Master Methods
#### Millie’s Magic Triangle
The ability to rearrange equations is the first step to successfully solve Physics calculations. Millie’s…
#### Calculation Master Method
Performing and mastering this routine will guarantee you maximum marks when solving Physics calculations. Calculation…
## The Fridge Physics Store
#### Teacher Fast Feedback
Feedback to students in seconds – Voice to label thermal bluetooth technology…
#### Get Fridge Physics Merch
Why not buy a Fridge Physics baseball cap, woollen beanie, hoodie or polo shirt, all colours and sizes available. Free delivery to anywhere in the UK!…
## Efficiency
The efficiency of a device is the proportion of input energy that is converted to useful energy.
## What is Efficiency?
The efficiency of a device is the proportion of input energy that is converted to useful energy. Efficiency is a measure of how much work or energy is conserved in an energy transfer, work or energy can be lost, for example as wasted heat energy. The efficiency is the useful energy output, divided by the total energy input, and can be given as a decimal always less than 1 or a percentage. No machine is 100% efficient.
## Efficiency equation
To calculate Efficiency we use this equation.
$efficiency = {\text{useful energy output} \over\text{total energy input}}$
## Efficiency demo
In this tutorial you will learn how to calculate how efficient a device is at transferring energy from one form to another.
## Chilled practice question
Calculate the efficiency of a light bulb with a total input energy of 500 J. The bulb emits 200 J of light energy and 300 J of heat energy.
## Frozen practice question
A tumble drier is 80% efficient. Its useful energy is 45 KJ what is the total input energy in Joules.
## Science in context
The efficiency of a device is the proportion of the total input energy that is converted to useful energy.
## Millie’s Master Methods
#### Millie’s Magic Triangle
The ability to rearrange equations is the first step to successfully solve Physics calculations. Millie’s…
#### Calculation Master Method
Performing and mastering this routine will guarantee you maximum marks when solving Physics calculations. Calculation…
## The Fridge Physics Store
#### Teacher Fast Feedback
Feedback to students in seconds – Voice to label thermal bluetooth technology…
#### Get Fridge Physics Merch
Why not buy a Fridge Physics baseball cap, woollen beanie, hoodie or polo shirt, all colours and sizes available. Free delivery to anywhere in the UK!…
## Transformers
transformer is an piece of electrical apparatus which will increase or decrease the voltage in an alternating current. It can be designed to “step up” or “step downvoltages and is based on the magnetic induction principle.
## Demo
In this tutorial you will learn how to calculate voltages in step up and step down transformers.
The equation is written like this:
${ \text V_{1} \; \text / \; \text V_{2} \; \text = \; \text N_{1} \; \text / \; \text N_{2} \;}$
## Chilled practice question
A transformer has 300 turns on its primary coil and 100 turns on it secondary coil. What voltage is induced in the secondary coil if the primary voltage is 96 V ?
## Frozen practice question
A transformer has 320 turns on the primary coil and 800 turns on the secondary coil. Find the voltage across the primary coil when the voltage across the secondary coil is 1320 V.
## Science in context
transformer is an piece of electrical apparatus which will increase or decrease the voltage in an alternating current. It can be designed to “step up” or “step downvoltages and is based on the magnetic induction principle.When a voltage is introduced to one coil, called the primary coil it magnetizes its core which is made from iron which induces a voltage in the secondary coil.
## Millie’s Master Methods
#### Millie’s Magic Triangle
The ability to rearrange equations is the first step to successfully solve Physics calculations. Millie’s…
#### Calculation Master Method
Performing and mastering this routine will guarantee you maximum marks when solving Physics calculations. Calculation…
## The Fridge Physics Store
#### Teacher Fast Feedback
Feedback to students in seconds – Voice to label thermal bluetooth technology…
#### Get Fridge Physics Merch
Why not buy a Fridge Physics baseball cap, woollen beanie, hoodie or polo shirt, all colours and sizes available. Free delivery to anywhere in the UK!…
## The Motor Effect
A current-carrying wire or coil can exert a force on a permanent magnet. The force increases if the strength of the magnetic field and/or current increases. This is called the motor effect.
## Demo
In this tutorial you will learn how to calculate the force on a wire carrying a current in a magnetic field.
The equation is written like this:
$F = { \text B \; \text x \; \text I \; \text x \; \text L}$
## Chilled practice question
Calculate the force on a wire carrying a current of 3 A. 0.25 m of the wire is in the magnetic field. The magnetic flux density is 15 T.
## Frozen practice question
The force on a wire in a magnetic field is 30 N. The magnetic flux density is 3 T and the length of wire in the magnetic field is 50 cm. Calculate the current.
## Science in context
A current-carrying wire or coil can exert a force on a permanent magnet.The force increases if the strength of the magnetic field and/or current increases. This is called the motor effect.
## Millie’s Master Methods
#### Millie’s Magic Triangle
The ability to rearrange equations is the first step to successfully solve Physics calculations. Millie’s…
#### Calculation Master Method
Performing and mastering this routine will guarantee you maximum marks when solving Physics calculations. Calculation…
## The Fridge Physics Store
#### Teacher Fast Feedback
Feedback to students in seconds – Voice to label thermal bluetooth technology…
#### Get Fridge Physics Merch
Why not buy a Fridge Physics baseball cap, woollen beanie, hoodie or polo shirt, all colours and sizes available. Free delivery to anywhere in the UK!…
Scroll to Top
|
2023-03-27 16:35:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3747503161430359, "perplexity": 2210.7531413319953}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948673.1/warc/CC-MAIN-20230327154814-20230327184814-00169.warc.gz"}
|
http://www.solipsys.co.uk/cgi-bin/sews.py?StraightLine
|
# Straight Line
From the old English "stretched linen" suggesting its origins whereby a piece of string is stretched, chalked and flicked thereby leaving a shadow in chalk of the string in the form of a straight line.
The first two axioms (or postulates) of Euclidian geometry concern straight line segements.
1. A straight line segment can be drawn joining any two points.
2. Any straight line segment can be extended indefinitely in a straight line.
The equation of a straight line in cartesian coordinates is $ax+by=c$
A common transformation of this is $y=mx+c$
One of the Named Curves on this site.
|
2020-02-24 14:47:20
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3200554847717285, "perplexity": 597.306516407487}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145960.92/warc/CC-MAIN-20200224132646-20200224162646-00507.warc.gz"}
|
http://cghhu.cn:81/pyflann-shi-yong-fang-fa/
|
# PyFlann 使用方法
PyFlann 其实是 FLANNpython 接口,当前支持python2 和 python3。FLANN 的意思是Fast Library for Approximate Nearest Neighbors,也就是快速解决最近点搜类问题的库。
# 安装
pip安装
pip install pyflann
git clone https://github.com/primetang/pyflann.git
cd pyflann
[sudo] python setup.py install
# 使用
pyflann 包 提供了一个名为 FLANN 的类,来负责执行最近点搜索这个具体的操作。这个类包含如下的函数。
## def build_index(self, pts, **kwargs)
pts 是数据集,必须是 numpy2D 数组或 matrix,用 row优先方式存储。
**kwargs 是一组不定的参数,首先包含一个参数 algorithm ,然后根据 algorithm 参数的不同,后续的参数也是不同的,总共有如下几种情况。
flann = pyflann.FLANN()
# 初始化 dataset
params = flann.build_index(dataset, algorithm = 'linear')
params = flann.build_index(dataset, algorithm = 'kdtree', trees)
params = flann.build_index(dataset, algorithm = 'autotuned',
target_precision, build_weight, memory_weight, sample_fraction)
params = flann.build_index(dataset, algorithm = "means", branching,
iterations, centers_init, cb_index)
params = flann.build_index(dataset, algorithm = "composite", tress, branching,
iterations, centers_init, cb_index)
### linear
Linear 算法并没有创建内部index,它是采用了暴力法求解,线性查找,因此无其他参数,且速度非常慢。
params = flann.build_index(dataset, algorithm = 'linear')
### autotuned
• build weight - speci es the importance of the index build time raported to the nearest-neighbor search time. In some applications it's acceptable for the index build step to take a long time if the subsequent searches in the index can be performed very fast. In other applications it's required that the index be build as fast as possible even if that leads to slightly longer search times. (Default value: 0.01)
• memory weight - is used to specify the tradeo between time (index build time and search time) and memory used by the index. A value less than 1 gives more importance to the time spent and a value greater than 1 gives more importance to the memory usage.
• sample fraction - is a number between 0 and 1 indicating what fraction of the dataset to use in the automatic parameter con guration algorithm. Running the algorithm on the full dataset gives the most accurate results, but for very large datasets can take longer than desired. In such case, using just a fraction of the data helps speeding up this algorithm, while still giving good approximations of the optimum parameters.
from pyflann import *
from numpy import *
from numpy.random import *
dataset = rand(10000, 128)
testset = rand(1000, 128)
flann = FLANN()
params = flann.build_index(dataset, algorithm="autotuned", target_precision=0.9, log_level = "info");
print params
result, dists = flann.nn_index(testset,5, checks=params["checks"]);
### kd tree
k-d tree,是一种分割k维数据空间的数据结构。主要应用于多维空间关键数据的搜索(如:范围搜索和最近邻搜索)。K-D树是二进制空间分割树的特殊的情况。
params = flann.build_index(dataset, algorithm = 'kdtree', trees=4)
### kmeans
Hierarchical k-means 算法。需要输入以下参数:
• branching - the branching factor to use for the hierarchical kmeans tree creation. While kdtree is always a binary tree, each node in the kmeans tree may have several branches depending on the value of this parameter.
• iterations - the maximum number of iterations to use in the kmeans clustering stage when building the kmeans tree. A value of -1 used here means that the kmeans clustering should be performed until convergence.
• centers_init - the algorithm to use for selecting the initial centers when performing a kmeans clustering step. The possible values are 'random' (picks the initial cluster centers randomly), 'gonzales' (picks the initial centers using the Gonzales algorithm) and 'kmeanspp' (picks the initial centers using the algorithm suggested in [AV07]). If this parameters is omitted, the default value is 'random'.
• cb_index - this parameter (cluster boundary index) in uences the way exploration is performed in the hierarchical kmeans tree. When cb index is zero the next kmeans domain to be explored is choosen to be the one with the closest center. A value greater then zero also takes into account the size of the domain.
## def nnindex(self, qpts, numneighbors = 1, **kwargs)
• qpts: 待查询的 testset,维度要与之前建立时使用的数据集一样。比如建立的数据集是 1000 x 3的矩阵,查询的数据集必须为 n x 3 的矩阵,否则无法进行查询。
• num_neighbors: 查询最近的几个点,根据这个值决定返回的数值。如果 testset 的为 1000 x 3 矩阵,查询最近的五个节点,则返回 1000 x 5的矩阵,如果查询最近的一个节点,则返回 1000 x 1 的数组。
• kwargs: checks=["checks"]
## def nn(self, pts, qpts, num_neighbors = 1, **kwargs)
from pyflann import *
import numpy as np
dataset = np.array(
[[1., 1, 1, 2, 3],
[10, 10, 10, 3, 2],
[100, 100, 2, 30, 1]
])
testset = np.array(
[[1., 1, 1, 1, 1],
[90, 90, 10, 10, 1]
])
flann = FLANN()
result, dists = flann.nn(
dataset, testset, 2, algorithm="kmeans", branching=32, iterations=7, checks=16)
print result
print dists
## def setdistancetype(distance type, order = 0)
• type - the distance type to use. Possible values are: 'euclidean', 'manhattan', 'minkowski', 'max dist' (L infinity - distance type is not valid for kd-tree index type since it's not dimensionwise additive), 'hik' (histogram intersection kernel), 'hellinger','cs' (chi-square) and 'kl' (Kullback-Leibler).
• order - only used if distance type is 'minkowski' and represents the order of the minkowski distance.
# Reference
### Subscribe to CG-HHU
Get the latest posts delivered right to your inbox.
or subscribe via RSS with Feedly!
|
2018-08-15 02:50:52
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.29988425970077515, "perplexity": 8987.277562108757}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221209856.3/warc/CC-MAIN-20180815024253-20180815044253-00160.warc.gz"}
|
http://burubaxair.wordpress.com/
|
## Lagrange polynomial interpolation in Scilab
Assume we have data (xi, yi), i = 1, …, n.
The data don’t have to be equally spaced.
We can pass a Lagrange polynomial P(x) of degree n−1 through these data points.
function y = lagrange_interp(x,data)
for i = 1:length(x)
y(i) = P(x(i),data);
end
endfunction
The polynomial P(x) is a linear combination of polynomials Li(x), where each Li(x) is of degree n−1
P(x) = y1L1(x) + y2L2(x) + ... + ynLn(x)
The polynomials Li(x) are defined as
Li(x) = (xx1) ... (xxi-1)(xxi+1) ... (xxn) / αi
where
αi = (xix1) ... (xixi-1)(xixi+1) ... (xixn)
It is obvious that Li(xj) = δij and therefore P(xi) = yi meaning that the polynomial P passes through all the data (xi, yi), i = 1, ..., n.
If the the data (xi, yi) are organized in an n×2 matrix, then the polynomial P can be computed as follows
function y = P(x,data)
n = size(data,1);
xi = data(:,1);
yi = data(:,2);
L = cprod_e(xi,x) ./ cprod_i(xi);
y = yi' * L;
endfunction
where the function cprod_e calculates the numerators of Li(x) for each i, namely all the products
(xx1) ... (xxi-1)(xxi+1) ... (xxn)
function y = cprod_e(x,a)
n = length(x);
y(1) = prod(a-x(2:$)); for i = 2:n-1 y(i) = prod(a-x(1:i-1))*prod(a-x(i+1:$));
end
y(n) = prod(a-x(1:$-1)); endfunction The function cprod_i calculates all αi for i = 1, ..., n. function y=cprod_i(x) n = length(x); y(1) = prod(x(1)-x(2:$));
for i = 2:n-1
y(i) = prod(x(i)-x(1:i-1))*prod(x(i)-x(i+1:$)); end y(n) = prod(x(n)-x(1:$-1));
endfunction
Now, let's test our code.
I take two examples from the book "Fundamentals of Engineering Numerical Analysis" by Prof. Parviz Moin.
Both examples use data obtained from the Runge’s function
y = 1/(1+25 x2)
The data in the first example are equally spaced:
data1=[...
-1.0 0.038;
-0.8 0.058;
-0.6 0.1;
-0.4 0.2;
-0.2 0.5;
0.0 1.0;
0.2 0.5;
0.4 0.2;
0.6 0.1;
0.8 0.058;
1.0 0.038];
The commands to perform Lagrange interpolation are:
x = linspace(-1,1,1000);
y = lagrange_interp(x,data1);
Here's the plot of the data (red circles), the Lagrange polynomial (solid line) and the Runge’s function (dashed line):
plot(x,y,'-b',...
x, 1.0 ./ (1+25.*x.^2),'--g',...
data1(:,1),data1(:,2),'or')
The second example takes non-equally spaced data:
data2 = [...
-1.0 0.038;
-0.95 0.042;
-0.81 0.058;
-0.59 0.104;
-0.31 0.295;
0.0 1.0;
0.31 0.295;
0.59 0.104;
0.81 0.058;
0.95 0.042;
1.0 0.038];
Interpolating and plotting them as we did in the previous example produces the following picture.
y = lagrange_interp(x,data2);
plot(x,y,'-b',...
x, 1.0 ./ (1+25.*x.^2),'--g',...
data2(:,1),data2(:,2),'or')
Tags:
We did it with this kit: link
A video from the kit manufacturer’s site:
## Digital signal processing in Scilab and Xcos. Part 3: Fourier Analysis
Previous parts: part 1, part 2.
Scilab is optimized for matrix calculations, so I will perform Fourier analysis here in the language of matrices and vectors.
For theoretical details about the matrix treatment of Fourier transform, please check Chapter 4 of an excellent book, Signal Processing for Communications, by P.Prandoni and M. Vetterli, 2008, EPFL Press.
Consider finite length complex-valued signals
N−1 N−1 x[n] = ∑ xk δk[n] = ∑ xk δ[n−k] k=0 k=0
where δ[nk] are the discrete-time impulses:
δ[n] = 1 when n = 0 and δ[n] = 0 when n ≠ 0.
The set of discrete-time impulses δk[n] = δ[n-k], k = 0, …, N−1 forms an orthonormal basis in the space of discrete signals of length N.
The Fourier transform represents a signal as a superposition of harmonic oscillations of different frequencies.
In other words, Fourier transform of the signal x[n] is its representation in another basis, namely, in the basis formed by the set of all harmonic oscillators of length N
N−1 x[n] = N −1 ∑ Xk ψk[n] k=0
where ψk[n] = exp (jnk/N) = (ψ1[1])nk
The vectors ψk[n] form an orthogonal basis in C N
⟨ψk, ψh⟩ = N when k = h and ⟨ψk, ψh⟩ = 0 when kh
Any signal that neither explodes nor disappears has to be composed of periodic signals.
The Fourier coefficients are
N−1 Xk = ⟨ψk, x⟩ = ∑ e −j 2πnk/N x[n] n=0
In matrix form, X = Ψx and x = ΨX/N, where Ψnk = (ψ1[1])nk
X0/N is the average value of the signal x[n].
The orthonormal basis vectors ψ′k[n] = N −1/2 ψk[n] are less convenient here because
ψ′k[n] ≠ (ψ′1[1])nk
therefore, if we used them, we wouldn’t be able to construct the matrix Ψ in such a beautiful way as we did with the non-orthonormal basis vectors ψk
A Scilab implementation of the matrix Ψ:
function [res] = psi(N)
psi11 = exp(%i*2*%pi/N);
res(1,1:N) = ones(1,N);
for j = 1 : N
res(2,j) = psi11^(j-1);
end
for i = 3 : N
res(i,1:N) = res(2,1:N).^(i-1)
end
endfunction
The k-th row (or the k-th column) of Ψ is the basis vector ψk
The following picture shows the real and imaginary parts of the basis vectors ψk with N = 64 (click on the picture to enlarge):
The Scilab code for a function that finds the Fourier coefficients:
function [res] = F(x)
N = length(x);
res = psi(N)' * x;
endfunction
Let's choose N = 64
N = 64;
n=[0:1:N-1]';
Then, for any signal x, it's Fourier coefficients are found as
X = F(x);
I also set those coefficients that are very small to zero, otherwise they will look ugly in pictures
eps = 1e-8;
realzero = find(abs(real(X))<eps);
X(realzero) = %i*imag(X(realzero));
imagzero = find(abs(imag(X))<eps);
X(imagzero) = real(X(imagzero));
abszero = find(abs(X)<eps);
X(abszero) = 0;
The real parts of the Fourier coefficients are obtained with the function real(X), their imaginary parts -- imag(X), and their amplitudes -- abs(X).
The phases of the Fourier coefficients are obtained as follows
// phase
ph = zeros(X);
nonzero = find(abs(real(X))>eps & abs(imag(X))>eps);
if length(nonzero) > 0 then
ph(nonzero) = atan(imag(X(nonzero)),real(X(nonzero)));
end
I put the phase to zero if either the real or imaginary part of X is very small.
The following animation shows the Fourier transforms of the basis vectors δ[nk] (click on the picture to enlarge):
We see that the Fourier coefficients of δ[nk] are equal to the components of the vectors ψ*k
Conversely, the Fourier coefficients of ψk will be the components of the vectors N δ[nk]
Now, let's study Fourier transform of a finite length input.
A finite length signal can be implemented by the following Scilab function:
function [res] = fin(n,N)
res = zeros(N,1);
N1=N/2-n/2+1;
res(N1:N1+n-1)=1;
endfunction
The animation below shows how the Fourier coefficients change as the length of the signal increases from 1 to 64.
From this animation we see that the longer the signal is in the time domain, the narrower it is in the frequency domain; and vice versa -- the shorter the signal is in the time domain, the wider it is in the frequency domain.
A single impulse (of length 1) contains all frequencies in its spectrum while a constant signal x[n] = 1 has only one frequency (equal to zero).
Tags:
## Digital signal processing in Scilab and Xcos. Part 2: Playing sound files
In my previous post on DSP in Scilab, I gave an example on how to open a wav file and make it available for further processing.
Today we’ll discuss the opposite task: how to generate signals and store them into wav files.
Open the Scilab’s editor
editor
Choose the sampling rate, the number of bits per sample, and the time duration for your signals
Fs = 11025; // samples per second
bits = 16; // bits per sample
t_total = 10; // seconds
The total number of samples in the signal:
n_samples = Fs * t_total;
Create an array of time points at which the samples will be synthesized:
t = linspace(0, t_total, n_samples);
The following code will generate a 440 Hz sine wave, and save it into a wav file sin440.wav
f=440; // sound frequency
// Sine wave
sin_wave = sin(2*%pi*f*t);
wavwrite(sin_wave, Fs, bits, sin_file);
Let's now generate a sawtooth wave of the same frequency and output it into another wav file:
// Sawtooth wave
saw_wave=2*(f*t-floor(0.5+f*t));
wavwrite(saw_wave, Fs, bits, saw_file);
Similarly, for a triangle and a square waves:
// Triangle wave
tri_wave=(2/%pi)*asin(sin(2*%pi*f*t));
wavwrite(tri_wave, Fs, bits, tri_file);
// Square wave
sq_wave=sign(sin(2*%pi*f*t));
wavwrite(sq_wave, Fs, bits, sq_file);
Let's simulate a guitar sound using the Karplus–Strong string synthesis method.
// Karplus-Strong
n_width=100;
ks=-1+2*rand(1,n_width,"uniform");
alpha=0.96;
while ( length(ks) < n_samples )
ks=[ks,alpha*ks($-n_width+1:$)];
end
ks=ks(1:n_samples);
We can play the generated sound in Scilab with
sound(ks)
and plot its waveform
plot(t,ks)
Next: part 3
|
2014-07-23 05:36:23
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7167782187461853, "perplexity": 3431.3615591133157}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997874283.19/warc/CC-MAIN-20140722025754-00241-ip-10-33-131-23.ec2.internal.warc.gz"}
|
http://shell.math.rutgers.edu/grad/gradcourses/descriptions.php?term=Fall_2012
|
Mathematics Department - Graduate Course Descriptions - Fall 2012
### Theory of Functions of a Real Variable I
Text: Measure and Integral, by R. Wheeden and A. Zygmund, Taylor and Francis publisher.
The book is not required, but I will follow it closely. A supplementary text is Real Analysis by E. Stein and R. Shakarchi in Princeton Lecture Notes in Analysis III.
Prerequisites: Advanced Calculus and Elementary Topology of Euclidean Space.
Description: Basic real variable function theory needed for pure and applied analysis. Topics: bounded variation; Riemann-Stieltjes integration; Lebesgue outermeasure and Lebesgue measure of sets in Euclidean space; nonmeasurable sets; measurable functions; Lusin and Egorov theorems; convergence in measure; Lebesgue integration; convergence theorems for integrals; relations between Lebesgue integrals, Riemann integrals and Riemann-Stieltjes integrals;; product measures and Fubini, Tonelli theorems; Vitali covering lemmas; Lebesgue differentiation theorem; Hardy-Littlewood maximal function; differentiation of functions of bounded variation, as time permits.
### Theory of Functions of a Complex Variable I
Text: Function Theory of One Complex Variable: Third Edition (Graduate Studies in Mathematics) *by Robert E. Greene and Steven G. Krantz
Publisher: American Mathematical Society; 3rd edition (March 29, 2006)
ISBN-13: 978-0821839621
Prerequisites: Acquaintance with analytic arguments at the level of Rudin's Principles of Modern Analysis is necessary. Some knowledge of algebra and point-set topology is useful.
Description:
The beginning of the study of one complex variable is certainly one of the loveliest mathematical subjects. It's the magnificent result of several centuries of investigation into what happens when R is replaced by C in "calculus". Among the consequences were the creation of numerous areas of modern pure and applied mathematics, and the clarification of many foundational issues in analysis and geometry. Gauss, Cauchy, Weierstrass, Riemann and others found this all intensely absorbing and wonderfully rewarding. The theorems and techniques developed in modern complex analysis are of great use in all parts of mathematics.
The course will be a rigorous introduction with examples and proofs foreshadowing modern connections of complex analysis with differential and algebraic geometry and partial differential equations. Acquaintance with analytic arguments at the level of Rudin's Principles of Modern Analysis is necessary. Some knowledge of algebra and point-set topology is useful.
The course will include some appropriate review of relevant topics, but this review will not be enough to educate the uninformed student adequately. A previous "undergraduate" course in complex analysis would also be useful though not necessary.
There are many excellent books about this subject. The official text will be Function Theory of One Complex Variable, by Greene and Krantz (American Math Society, 3rd edition, 2006). The course will cover most of Chapters 1 through 5 of the text, parts of Chapters 6 and 7, and possibly other topics. The titles of these chapters follow.
1: Fundamental Concepts; 2: Complex Lines Integrals; 3: Applications of the Cauchy Integral;4: Meromorphic functions and Residues; 5: The Zeros of a Holomorphic Function; 6: Holomorphic Functions as Geometric Mappings.
### Selected Topics in Analysis
Subtitle: Topological Methods in Nonlinear Analysis
Text: Recommended: Chapters 2, 4, 7 of Gilbarg-Trudinger (not the entire book) are recommended for this course. They can be photo-copied from an original copy (second edition). I will make them available to the students that do not have them. Brezis - Functional Analysis, Springer 2011, M.A.Krasnosel'skii - Topological Methods in the Theory of Nonlinear Integral Equations (International Series of Monographs in Pure and Applied Mathematics Vol. 45)
Prerequisites: Knowledge of Lebesgue integration theory and of earlier courses in Real and Functional Analysis is a prerequisite for this class.
Description: PLEASE NOTE: THIS COURSE HAS REPLACED COURSE 16:640:507. IF YOU REGISTERED FOR 16:640:507, YOU HAVE BEEN MOVED OVER TO 16:640:509.
This course should prepare students to set up the framework and learn some of the early and basic techniques in order to solve some nonlinear PDEs via topological methods.
1- The first part of the course covers Lp-spaces (convolution, dual spaces), Wm,p_ spaces, Rellich-Kondrachov embeddings. It also covers the resolution of the Dirichlet problem Δu = f ; u = 0|∂Ω via Green's function representation techniques (less general, but useful). References for this first part of the course are Gilbarg-Trudinger Chapters 2,4,7 (I have the second edition, Springer 1983).
2- The second part of the course covers Fredholm operators of index ν and 0 (self-adjoint operators) and Diagonalization of compact self-adjoint operators in separable Hilbert spaces.
There are a number of references on this very classical subject in the literature, including Brezis, Functional Analysis, Springer 2011, Riesz-Nagy, Analyse Fonctionnelle (Academie des Sciences de Hongrie) and others...
3-The third part of the course is mainly based on the old and very classical book of M.A.Krasnosels'kii's Topological Methods in the Theory of Nonlinear Integral Equations (International Series of Monographs in Pure and Applied Mathematics Vol 45; I have the copy of the library...). There are again a number of other references in the literature, including notes by L.Nirenberg at Courant, J.T Schwartz (also Courant Lecture Notes) and excellent papers by A.Ambrosetti and P.H Rabinowitz... This third part should cover:
• Caratheodory conditions and continuous maps from Lp(Ω) to Lq(Ω).
• Finite dimensional degree theory (here, we will use a differential geometry approach. One good reference, after assuming transversality and the Morse-Sard theorem, is provided in M.Hirsch's book Differential Topology, Graduate text in Maths 33, Springer 1997)
• Leray-Schauder degree
• Weakly continuous functionals on the unit sphere or the unit ball of a separable Hilbert space (Z2 and S1-Lusternik-Schnirelman theory)
I hope to be able to cover all the topics of this course. Whatever will be covered will be covered in detail. Knowledge of Lebesgue integration theory and of earlier courses in Real and Functional Analysis is a prerequisite for this class.
Required and recommended sources:
Chapters 2, 4, 7 of Gilbarg-Trudinger (not the entire book) are recommended for this course. They can be photo-copied from an original copy (second edition). I will make them available to the students that do not have them.
Brezis' book, cited above, is recommended, so is M.A.Krasnoselsk'ii's book cited above.
### Partial Differential Equations I
Text: Partial Differential Equations: Second Edition (Graduate Studies in Mathematics), By Lawrence C. Evans, American Mathematical Society; 2nd edition (March 3, 2010), ISBN-10: 0821849743.
Prerequisites: A strong background on advanced calculus involving multivariables (esp. Green's Theorem and Divergence Theorem). We will also use some basic facts of Lp function spaces and the usual integral inequalities (mostly completeness and Holder inequalities in L2 setting). These topics are covered in the first semester graduate real variable course (640:501).
Description: This is the first half of a year-long introductory graduate course on PDE. This introductory course should be useful for students with a variety of research interests: physics and mathematical physics, applied analysis, numerical analysis, differential geometry, complex analysis, and, of course, partial differential equations. This is the way the course will be conducted. The beginning weeks of the course aim to develop enough familarity and experience to the basic phenomena, approaches, and methods in solving initial/boundary value problems in the contexts of the classical prototype linear PDEs of constant coefficients: the Laplace equation, the D'Alembert wave equation, the heat equation and the Schroedinger equation. Next we will discuss first order nonlinear PDEs (e.g. characteristics, Hamilton-Jacobi equations), some ways to represent solutions (e.g. separation of variables, similarity solutions, Fourier and Laplace transforms, Hodograph and Legendre transform, singular perturbation and homogenizations, Cauchy-Kovalevskaya theorem). Then we discuss Soblev spaces and Soblev embedding theorems, second order elliptic equations of divergence form (existence, uniquesness, regularity), second order parabolic equations (existence, maximum principles, regularity).
### Harmonic Analysis on Euclidean Spaces
Text: None required
Prerequisites: Math 501, 502 and 503.
Description: The principal topics to be covered in this course are: 1. Interpolation Theorems, Stein's theorem, Marcinkiewicz interpolation theorems. 2. Hardy-Littlewood-Sobolev fractional Integration theorems. 3. Singular integrals, Calderon-Zygmund Theory. 4. Cotlar-Stein Lemma, Singular integrals of Non-convolution type. 5. Bounded Mean Oscillation, John-Niremberg theorem. 6. Littlewood-Paley theory 7. Fourier Transform restriction, Bochner-Riesz theory. 8. Strichartz estimates for wave and Schrodinger eqns. 9. Time permitting T1 theorems.
### Functions of Several Complex Variables I
Text: The course materials will be largely taken from the following:
[1] L. Hormander, {\it An introduction to complex analysis in several variables}, Third edition, North-Holland, 1990.
[2] James Morrow and K. Kodaira, {\it Complex Manifolds}, Rinehart and Winston, 1971.
[3] Xiaojun Huang, Lectures on the Local Equivalence Problems for Real Submanifolds in Complex Manifolds, Lecture Notes in Mathematics 1848 (C.I.M.E. Subseries), Springer-Verlag, 2004.
[4] Subelliptic analysis on Cauchy-Riemann manifolds, Lecture Notes on the national summer graduate school of China, 2007. (to appear)
Prerequisites: One complex variable and the basic Hilbert space theory from real analysis
Description:
A function with $n$ complex variables $z\in {\bf C}^n$ is said to be holomorphic if it can be locally expanded as power series in $z$. An even dimensional smooth manifold is called a complex manifold if the transition functions can be chosen as holomorphic functions. Roughly speaking, a Cauchy-Riemann manifold (or simply, a CR manifold) is a manifold that can be realized as the boundary of a certain complex manifold. Several Complex Variables is the subject to study the properties and structures of holomorphic functions, complex manifolds and CR manifolds. Different from one complex variable, if $n>1$ one can never find a holomorphic function over the punctured ball that blows up at its center. This is the striking phenomenon that Hartogs discovered about 100 years ago, which opened up the first page of the subject. Then Poincar\'e, E. Cartan, Oka, etc, further explored this field and laid down its foundation. Nowadays as the subject is intensively interacting with other fields, providing important examples, methods and problems, the basic materials in Several Complex Variables have become mandatory for many investigations in pure mathematics. This class tries to serve such a purpose, by presenting the following fundamental topics from Several Complex Variables.
(a)Holomorphic functions, plurisubharmonic functions, pseudoconvex domains and the Cauchy-Riemann structure on the boundary of complex manifolds
(b) H\"ormander's $L2$-estimates for the $\bar \partial$-equation and the Levi problem \noindent
(c) Cauchy-Riemann geometry, Webster's pseudo-Hermitian Geometry and subelliptic analysis on CR manifolds
(d) Complex manifolds, holomorphic vector bundles, Kahler Geomtry.
### Introduction to Differential Geometry
Text: Lee, Introduction to Smooth Manifolds
Prerequisites: Point set topology
Description: Possible topics include differential forms on manifolds, vector bundles & curvature, Riemannian metrics & geodesics, and symplectic forms & moment maps. People interested in this course should e-mail me at ctw@math.rutgers.edu and let me know their background and interests.
### Algebraic Geometry I
Text: No textbook.
Prerequisites: 640:503 and 640: 551 or permission of instructor
Description: Algebraic geometry is a study of solutions of polynomial equations. The course will be an overview of basic examples and techniques. The topics covered are likely to include: coordinate rings of affine and projective varieties, Bezout theorem, introduction to elliptic curves, blowups, introduction to sheaves and schemes, Cech cohomology, line bundles, Hurwitz formula, Riemann-Roch formula. If time permits, other topics may be covered, such as Chern classes and Grothendieck-Riemann-Roch formula, ADE singularities, toric varieties, Grassmannians, birational geometry. I will focus on the geometric intuition behind the algebraic constructions and will try to include as many examples of algebraic varieties as possible.
Text: Introduction to Algebraic Topology I, by Allen Hatcher. Publisher: Cambridge University Press; 1st edition (December 3, 2001) • Language: English • ISBN-10: 9780521795401 • ISBN-13: 978-0521795401 • ASIN: 0521795400 This book is available for $32 in paperback from Cambridge University Press, as well as (for free) online. Prerequisites: None Description: This course will be an introduction to the fundamental group and homology theory. The plan is to cover chapters 1,2 and parts of chapter 4 in Hatcher's book. Topics include: fundamental group, Van Kampen's Theorem, covering spaces homotopy groups and the homotopy category simplicial and singular homology, Brouwer's fixed-point theorem the Borsuk-Ulam theorem, and the Jordan-Brouwer separation theorem. ### Abstract Algebra I Text: Main text, N. Jacobson, Basic Algebra I, II (2nd edition, 1985) These books are available in paperback for under$20 from Dover Publications (2009). (ISBN: 0486471896 and 048647187X) There are supplementary handouts for: bilinear forms over fields, simple/semisimple algebras, and group representations. the end
Prerequisites: Standard course in Abstract Algebra for undergraduate students at the level of our Math 441.
Description: This is a standard course for beginning graduate students. It covers Group Theory, basic Ring & Module theory, and bilinear forms. Group Theory: Basic concepts, isomorphism theorems, normal subgroups, Sylow theorems, direct products and free products of groups. Groups acting on sets: orbits, cosets, stabilizers. Alternating/Symmetric groups. Basic Ring Theory: Fields, Principal Ideal Domains (PIDs), matrix rings, division algebras, field of fractions. Modules over a PID: Fundamental Theorem for abelian groups, application to linear algebra: rational and Jordan canonical form. Bilinear Forms: Alternating and symmetric forms, determinants. Spectral theorem for normal matrices, classification over R and C. (Class supplement provided) Modules: Artinian and Noetherian modules. Krull-Schmidt Theorem for modules of finite length. Simple modules and Schur's Lemma, semisimple modules. (from Basic Algebra II) Finite-dimensional algebras: Simple and semisimple algebras, Artin-Wedderburn Theorem, group rings, Maschke's Theorem. (Class supplement provided)
### Selected Topics in Algebra
Subtitle: Methods in Vertex Operator Algebra Theory
Text: I. Frenkel, J. Lepowsky and A. Meurman, Vertex Operator Algebras and the Monster, Pure and Applied Mathematics, Vol. 134, Academic Press, 1988. Supplementary Texts: I. Frenkel, Y.-Z. Huang and J. Lepowsky, On Axiomatic Approaches to Vertex Operator Algebras and Modules, Memoirs AMS, Vol. 104, Number 494, 1993; C. Dong and J. Lepowsky, Generalized Vertex Algebras and Relative Vertex Operators, Progress in Math., Vol. 112, Birkhauser, Boston, 1993; J. Lepowsky and H. Li, Introduction to Vertex Operator Algebras and Their Representations, Progress in Math., Vol. 227, Birkhauser, Boston, 2003.
Prerequisites: Some familiarity with the basics of vertex operator algebra theory. Students without such experience and potentially interested in this course are encouraged to consult Professor Lepowsky.
Description: We will construct lattice vertex operator algebras and generalizations, and twisted modules for such structures. This will lead to the construction of the moonshine module vertex operator algebra and of the action of the Monster sporadic finite simple group. Y.-Z. Huang's complementary method for constructing the moonshine module vertex operator algebra, using tensor category structure, will also be discussed. These ideas will be related to relevant research papers and to current research problems.
Please note: The Lie Groups/Quantum Mathematics Seminar, which will meet Fridays at 12:00, will sometimes be related to the subjects of the course. Students planning to take the course should also try to arrange to attend the seminar, although the seminar will not be required for the course.
### Selected Topics in Logic
Subtitle: Choice versus Determinacy: An Introduction to Classical Descriptive Set Theory
Text: No Text required
Prerequisites: A knowledge of basic set theory, including cardinals, ordinals and the axiom of choice.
Description: The Axiom of Choice (AC) implies the existence of various pathologies on the real line R such as a non-measurable set of reals, an uncountable set of reals with no perfect subset, a 2-coloring of the 2-subsets of R with no uncountable monochromatic subset, etc. In this course, we will explore the question of whether such pathologies can arise within the "definable sets" of reals; namely, the Borel sets, the analytic sets, ... , etc. While these questions cannot be fully settled using the usual ZFC axioms of set theory, we will find that they all have negative answers if we assume the Axiom of Projective Determinacy (PD): an extra set-theoretic axiom which posits the existence of winning strategies for a broad class of infinite 2-player games. The course will cover the following topics: (1) Basic Descriptive Set Theory We will study the hierarchies of the Borel sets and projective sets, and analyze the structure of the sets in each of these hierarchies. (2) Determinacy We will study infinite 2-player games played on the real line R. We shall see that the existence of winning strategies for suitably defined games implies the non-existence of set-theoretic pathologies within the projective hierarchy. (3) Applications We will study various applications to the theory of Borel equivalence relations and the theory of the Turing degrees of relative computability.
### Special Topics in Number Theory
Subtitle: THE RIEMANN ZETA FUNCTION
Text: Much of the material will be taken from the best research publications. For reading I recommend the classical book by E.C. Titchmarsh, The Theory of the Riemann Zeta Function, Clarendon Press, Oxford 1986.
Prerequisites: There are no special requirements as prerequisites. Students are assumed to have some skill in basic complex variable analysis.
Description: This course is for graduate students who would like to learn what is known up to date in the theory of the Riemann zeta function. Of course the Riemann Hypothesis will be given the most attention; however many other related questions and unconditional results will be presented in considerable details. Selected topics are: -Functional properties - Subconvexity estimates -Zero-free regions -Density of zeros off the critical line -Zeros on the critical line -Gaps between zeros -Pair correlation theory -Random matrix statistics -Heuristics beyond the Riemann Hypothesis Lectures will be on Tuesdays and Fridays, 12noon – 1:20 pm in Hill Center, room 423
### Topics in Number Theory
Subtitle: Number Theory and Modular Forms
Text: Recommended text: A Course in Arithmetic, J.-P. Serre, Springer Graduate Texts in Mathematics.
Prerequisites: Must have taken or be currently enrolled in 501, 503, and 551.
Description: The course will give an overview of problems in number theory that can be solved using modular forms and L-functions. It is intended to introduce beginning graduate students to modern areas of number theory. Topics include the prime number theorem, Dirichlet's theorem on primes in progressions, and the number of ways to represent integers as a sum of squares. Depending on the interests of the students I will cover some other material such as the circle method or an overview of Wiles' proof of Fermat's last theorem. Please check my website for updated and a more precise syllabus.
### Methods of Applied Mathematics I
Text: M.Greenberg, Advanced Engineering Mathematics(second edition); Prentice, 1998 (ISBN# 0-13-321431-1))
Prerequisites: Topics the student should know, together with the courses in which they are taught at Rutgers, are as follows: Introductory Linear Algebra (640:250); Multivariable Calculus (640:251); Elementary Differential Equations (640:244 or 640:252). The course Advanced Calculus for Engineering (640:421), which covers Laplace transforms, trigonometric series, and introductory partial differential equations, is a valuable preparation for Math 527, but is not required. Students uncertain of their preparation for this course should consider taking 640:421, or consult with the instructor.
Description: This is a first-semester graduate course, intended primarily for students in mechanical and aerospace engineering, biomedical engineering, and other engineering programs. Topics include power series and the method of Frobenius for solving differential equations; Laplace transforms; nonlinear differential equations and phase plane methods; vector spaces of functions and orthonormal bases; Fourier series and Sturm-Liouville theory; Fourier transforms; and separation of variables and other elementary solution methods for linear differential equations of physics, including the heat, wave, and Laplace equations. More information is on the www.math.rutgers.edu/courses/527/
### Linear Algebra and Applications
Text: Gilbert Strang, "Linear Algebra and its Applications", 4th edition, ISBN #0030105676, Brooks/Cole Publishing, 2007
Prerequisites: Familiarity with matrices, vectors, and mathematical reasoning at the level of advanced undergraduate applied mathematics courses.
Description: Note: This course is intended for graduate students in science, engineering and statistics.
This is an introductory course on vector spaces, linear transformations, determinants, and canonical forms for matrices (Row Echelon form and Jordan canonical form). Matrix factorization methods (LU and QR factorizations, Singular Value Decomposition) will be emphasized and applied to solve linear systems, find eigenvalues, and diagonalize quadratic forms. These methods will be developed in class and through homework assignments using MATLAB. Applications of linear algebra will include Least Squares Approximations, Discrete Fourier Transform, Differential Equations, Image Compression, and Data-base searching.
Grading: Written mid-term exam, homework, MATLAB projects, and a written final exam.
### Statistical Mechanics I: Equilibrium
Text: There are many textbooks on statistical mechanics. Each of them has much useful material. I strongly recommend that you look through some of them and find one which just suits you. Some Recommendation are: (1) H.B. Callan, Thermodynamics, John Wiley Sons, New York, 1960. Chapter 1, (2) J.W. Gibbs, Elementary Principles in Statistical Mechanics. Dover Publications, Introduction, (3) B. Simon, The Statistical Mechanics of Lattice Gases, Volume I, Princeton University Press, 1993, p. 3-34, (4) T.C. Dorlas, Statistical Mechanics, Fundamentals and Model Solutions, Institute of Physics Publishing, 1999, p. 44-45, 63-66, (5) C. Garrod, Statistical Mechanics and Thermodynamics. Oxford University Press, 1995, part of Chapter 2, (6) L.E. Reichl, A Modern Course in Statistical Physics, University of Texas Press, Austin, 1980, (7) D. Ruelle, Statistical Mecahanics: Rigorous Results, World Scientific, (8) S. Brush, The Kind of Motion We Call Heat, North Holland, p. 1-14. Please also look at the publication list on my web page.
Prerequisites:
Description: The course will cover traditional areas of statistical mechanics with a mathematical flavor. It will describe exact results where available and heuristic physical arguments where applicable. A rough outline is given below: (I.) Overview: microscopic vs. macroscopic descriptions; microscopic dynamics and thermodynamics (II.) Energy surface; microcanonical ensemble; ideal gases; Boltzmann’s entropy, typicality (III.) Alternate equilibrium ensembles; canonical, grand-canonical, pressure, etc. Partition functions and thermodynamics (IV.) Thermodynamic limit; existence; equivalence of ensembles; Gibbs measures. (V.) Cooperative phenomena: phase diagrams and phase transitions; probabilities, correlations and partition functions. Law of large numbers, fluctuations, large deviations. (VI.) Ising model, exact solutions. Griffith’s, FKG and other inequalities; Peierle’s argument; Lee-Yang theorems. (VII.) High temperature; low temperature expansions; Pirogov-Sinai theory (VIII.) Fugacity and density expansions (IX.) Mean field theory and long range potentials (X.) Approximate theories: integral equations, Percus-Yevick, hypernetted chain. Debye-H¨uckel theory. (XI.) Critical phenomena: universality, renormalization group. (XII.) Percolation and stochastic L¨oewner evolution. If you have any questions about the course please email me: lebowitz@math.rutgers.edu. We can then set up a time to meet.
### Numerical Analysis I
This course is part of the Mathematical Finance Master's Degree Program.
### Combinatorics I
Text: (1.) The Probabilistic Method, by Alon and Spencer (2.) Enumerative Combinatorics I and II, by Stanley (3.) Combinatorial Problems and Exercises, by Lovasz (4.) A Course in Combinatorics, by van Lint and Wilson
Prerequisites: Linear algebra, some discrete probability and mathematical maturity.
Description: We will study basic topics in Combinatorics such as enumeration, symmetry, polyhedral combinatorics, partial orders, set systems, Ramsey theory, discrepancy, additive combinatorics and quasirandomness. There will be emphasis on general techniques, including probabilistic methods, linear-algebra methods, analytic methods, topological methods and geometric methods. There will be problem sets every 2-3 weeks. There are no exams.
### Selected Topics in Discrete Mathematics
Subtitle: Probabilistic Methods in Combinatorics
Text: Alon-Spencer, The Probabilistic Method (optional, but useful).
Prerequisites: Prerequisites: I will try to make the course self-contained except for basic combinatorics and very basic probability. See me if in doubt.
Description: We will discuss applications of probabilistic ideas to problems in combinatorics and related areas (e.g. geometry, graph theory, complexity theory). We will also at least touch on topics, such as percolation and mixing rates for Markov chains, which are interesting from both combinatorics/TCS and purely probabilistic viewpoints.
### Topics in Probability and Ergodic Theory II
Text: I will not follow any one text, and I will provide class notes. I will draw upon the following texts mostly: (1) Bernt Oksendahl, Stochastic Differential Equations: An Introduction with Applications, Springer, latest edition (2) I. Karatzas and S. Shreve, Brownian Motion and Stochastic Calculus, Springer, second edition. (3) L.C.G. Rogers and D. Williams, Diffusions, Markov Processes, and Martingales, Volumes I and II, Cambridge (4) D. Revuz and M.Yor, Continuous Martingales and Brownian Motion, Springer, third edition. The first two books are available in paperback and are excellent for a first introduction to the subject; the Oksendahl text is more elementary and less rigorous; Karatzas and Shreve fill in many of the theoretical details.
Prerequisites: An introductory course to probability using measure theory and including conditional expectation and martingales in discrete time.
Description: This course will be an introduction to stochastic integration with respect to martingales and applications to the study of Brownian motion and stochastic differential equations. 1) Brownian Motion, Poisson processes and Levy processes; 2) Stopping times and filtrations; 3) Martingales in continuous time, Doob-Meyer decomposition and quadratic variation; 4) Stochastic Integrals and Ito's rule; 5) Applications to Brownian motion; 6) Stochastic Differential Equations and relations to parabolic equations; 7) The martingale problem, strong and weak solutions; 8) Application to diffusions.
### Mathematical Foundations for Industrial and Systems Engineering
Text: Bartle and Sherbert, Introduction to Real Analysis, 3rd Edition, Wiley & sons, 1992.
Prerequisites: None
Description: This course is offered specifically for graduate students in Industrial Engineering. Proof Structure for the Development of Concepts Based on the Real Numbers Axioms for the Real Numbers Logical Principles The Continuity Axiom The supremum concept and useful implications Convergence of sequences and series Development of the Calculus of Functions of One Variable Continuous functions and basic properties Differentiable functions and basic properties (the Mean value Theorem and Taylor's Theorem) The Riemann Integral and its basic properties The Fundamental Theorem of Calculus and implications Uniform convergence of sequences of functions
### Mathematical Finance I
This course is part of the Mathematical Finance Master's Degree Program.
### Credit Risk Modeling
This course is part of the Mathematical Finance Master's Degree Program.
### Portfolio Theory and Applications
This course is part of the Mathematical Finance Master's Degree Program.
### High Frequency Finance
This course is part of the Mathematical Finance Master's Degree Program.
### Selected Topics in Mathematical Finance
This course is part of the Mathematical Finance Master's Degree Program.
### Seminar in Mathematical Finance
This course is part of the Mathematical Finance Master's Degree Program.
### Topics in Mathematical Physics
Subtitle: Non-equilibrium statistical mechanics
Text: The lectures will be based on published articles and books freely available on the web.
Prerequisites: Thermodynamics, Real analysis and basic linear algebra; ordinary differential equations at a basic level will help.
Description: The course will cover the following topics: Equilibrium ensembles and the ergodic hypothesis Examples of ergodic systems and criticism of the need of the assumption Chaotic and ordered motions: the paradigmatic examples of quasi periodi motions nad of the geodesic motion on negative curvature surfaces The SRB distributions as extensions of the Gibbs' ensemmles Universality of non equilibrium fluctuations Simulations: thermostats and irreversibility Does entropy extend to stationary nonequilibrium? Equivalence of thermostats as an extension of the equivalence between ensembles of equilibrium statistical mechanics
### Topics in Mathematical Physics
Subtitle: Intro. to Dispersive Equations of Mathematical Physics, via Functional Analysis and Spectral Theory
Text: Functional Analysis by Reed & Simon, part I. Self-Adjointness, Reed & Simon II. The rest from notes and papers.
Prerequisites: Real analysis and basic linear algebra; ordinary differential equations at a basic level will help.
Description: We begin with basic notions of Analysis: Hilbert spaces and linear operators. Then compact operators and their applications in PDE and Math-Phys. Then, we go through the Spectral Theorem for general self-adjoint operators, and some immediate applications.Then, we turn to spectral theory: Notions of spectrum, eigenfunctions of Schroedinger type operators, Properties of continuous spectrum, decay estimates and scattering.
### Topics in Mathematical Physics
Subtitle: Mathematics and Mechanics of Materials
Text: Electrodynamics of Continuous Media by Landau, Lifshitz and Pitaevskii
Prerequisites: Advanced Calculus or Real Analysis, and Linear Algebra at the senior or graduate level, which may be replaced by a course on Applied Mathematics at the graduate level from the School of Engineering.
Description: The one-semester topic course introduces how differential equations are derived or motivated from fundamental physical laws and how empirical physical laws can be eventually interpreted by solutions to partial differential equations in mechanics, physics and materials science. Particular emphases are on the field theories for continuum media. Students in mathematics may benefit from the physical motivations of some classical equations while engineering students may benefit from the level of rigor and typical solution methods to differential equations. Precise topics will be adjusted according to the specific interest of students who take this course, but likely include the following: nonlinear elasticity, two-well problem, electrostatics and magnetostatics of continuum media, modeling of multifunctional materials.
This page was last updated on August 01, 2017 at 11:00 am and is maintained by grad-director@math.rutgers.edu.
|
2018-09-24 17:35:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5561395883560181, "perplexity": 1105.9135149501788}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267160620.68/warc/CC-MAIN-20180924165426-20180924185826-00536.warc.gz"}
|
https://pecanproject.github.io/pecan-documentation/v1.5.1/input-conversion.html
|
30 Input Conversion
Three Types of data conversions are discussed below: Meteorological data, Vegetation data, and Soil data. Each section provides instructions on how to convert data from their raw formats into a PEcAn standard format, whether it be from a database or if you have raw data in hand.
30.1 Meterological Data conversion
30.1.1 Adding a function to PEcAn to convert a met data source
In general, you will need to write a function to download the raw met data and one to convert it to the PEcAn standard.
Downloading raw data function are named download.<source>.R. These functions are stored within the PEcAn directory: /modules/data.atmosphere/R.
Conversion function from raw to standard are named met2CF.<source>.R. These functions are stored within the PEcAn directory: /modules/data.atmosphere/R.
Current Meteorological products that are coupled to PEcAn can be found in our Available Meteorological Drivers page.
Note: Unless you are also adding a new model, you will not need to write a script to convert from PEcAn standard to PEcAn models. Those conversion scripts are written when a model is added and can be found within each model’s PEcAn directory.
30.1.2 Dimensions:
CF standard-name units
time days since 1700-01-01 00:00:00 UTC
longitude degrees_east
latitude degrees_north
General Note: dates in the database should be date-time (preferably with timezone), and datetime passed around in PEcAn should be of type POSIXct.
30.1.3 The variable names should be standard_name
CF standard-name units bety isimip cruncep narr ameriflux
air_temperature K airT tasAdjust tair air TA (C)
air_pressure Pa air_pressure PRESS (KPa)
mole_fraction_of_carbon_dioxide_in_air mol/mol CO2
moisture_content_of_soil_layer kg m-2
soil_temperature K soilT TS1 (NOT DONE)
relative_humidity % relative_humidity rhurs NA rhum RH
specific_humidity 1 specific_humidity NA qair shum CALC(RH)
water_vapor_saturation_deficit Pa VPD VPD (NOT DONE)
surface_downwelling_longwave_flux_in_air W m-2 same rldsAdjust lwdown dlwrf Rgl
surface_downwelling_photosynthetic_photon_flux_in_air mol m-2 s-1 PAR PAR (NOT DONE)
precipitation_flux kg m-2 s-1 cccc prAdjust rain acpc PREC (mm/s)
degrees wind_direction WD
wind_speed m/s Wspd WS
eastward_wind m/s eastward_wind CALC(WS+WD)
northward_wind m/s northward_wind CALC(WS+WD)
• preferred variables indicated in bold
• wind_direction has no CF equivalent and should not be converted, instead the met2CF functions should convert wind_direction and wind_speed to eastward_wind and northward_wind
• standard_name is CF-convention standard names
• units can be converted by udunits, so these can vary (e.g. the time denominator may change with time frequency of inputs)
• soil moisture for the full column, rather than a layer, is soil_moisture_content
• A full list of PEcAn standard variable names, units and dimensions can be found here: https://github.com/PecanProject/pecan/blob/develop/base/utils/data/standard_vars.csv
For example, in the MsTMIP-CRUNCEP data, the variable rain should be precipitation_rate. We want to standardize the units as well as part of the met2CF.<product> step. I believe we want to use the CF “canonical” units but retain the MsTMIP units any time CF is ambiguous about the units.
The key is to process each type of met data (site, reanalysis, forecast, climate scenario, etc) to the exact same standard. This way every operation after that (extract, gap fill, downscale, convert to a model, etc) will always have the exact same inputs. This will make everything else much simpler to code and allow us to avoid a lot of unnecessary data checking, tests, etc being repeated in every downstream function.
30.1.4 Adding Single-Site Specific Meteorological Data
Perhaps you have meteorological data specific to one site, with a unique format that you would like to add to PEcAn. Your steps would be to: 1. write a script or function to convert your files into the netcdf PEcAn standard 2. insert that file as an input record for your site following these instructions
30.1.5 Processing Met data outside of the workflow using PEcAn functions
Perhaps you would like to obtain data from one of the sources coupled to PEcAn on its own. To do so you can run PEcAn functions on their own.
30.1.5.1 Example 1: Processing data from a database
raw.file <-PEcAn.data.atmosphere::download.AmerifluxLBL(sitename = "US-NR1",
outfolder = ".",
start_date = "2004-01-01",
end_date = "2004-12-31")
Using the information returned as the object raw.file you will then convert the raw files into a standard file.
Open a connection with BETY. You may need to change the host name depending on what machine you are hosting BETY. You can find the hostname listed in the machines table of BETY.
bety <- dplyr::src_postgres(dbname = 'bety',
host ='localhost',
user = "bety",
con <- bety$con Next you will set up the arguments for the function in.path <- '.' in.prefix <- raw.file$dbfile.name
outfolder <- '.'
format.id <- 5000000002
format <- PEcAn.DB::query.format.vars(format.id=format.id,bety = bety)
lon <- -105.54
lat <- 40.03
format$time_zone <- "America/Chicago" Note: The format.id can be pulled from the BETY database if you know the format of the raw data. Once these arguments are defined you can execute the met2CF.csv function PEcAn.data.atmosphere::met2CF.csv(in.path = in.path, in.prefix =in.prefix, outfolder = ".", start_date ="2004-01-01", end_date = "2004-12-01", lat= lat, lon = lon, format = format) 30.1.5.2 Example 2: Processing data from data already in hand If you have Met data already in hand and you would like to convert into the PEcAn standard follow these instructions. Update BETY with file record, format record and input record according to this page How to Insert new Input Data If your data is in a csv format you can use the met2CF.csvfunction to convert your data into a PEcAn standard file. Open a connection with BETY. You may need to change the host name depending on what machine you are hosting BETY. You can find the hostname listed in the machines table of BETY. bety <- dplyr::src_postgres(dbname = 'bety', host ='localhost', user = "bety", password = "bety") con <- bety$con
Prepare the arguments you need to execute the met2CF.csv function
in.path <- 'path/where/the/raw/file/lives'
in.prefix <- 'prefix_of_the_raw_file'
outfolder <- 'path/to/where/you/want/to/output/thecsv/'
format.id <- formatid of the format your created
format <- PEcAn.DB::query.format.vars(format.id=format.id,bety = bety)
lon <- longitude of your site
lat <- latitude of your site
format$time_zone <- time zone of your site start_date <- Start date of your data in "y-m-d" end_date <- End date of your data in "y-m-d" Next you can execute the function: PEcAn.data.atmosphere::met2CF.csv(in.path = in.path, in.prefix =in.prefix, outfolder = ".", start_date = start_date, end_date = end_date, lat= lat, lon = lon, format = format) 30.2 Vegetation Data Vegetation data will be required to parameterize your model. In these examples we will go over how to produce a standard initial condition file. The main function to process cohort data is the ic.process.R function. As of now however, if you require pool data you will run a separate function, pool_ic_list2netcdf.R. 30.2.0.1 Example 1: Processing Veg data from data in hand. In the following example we will process vegetation data that you have in hand using PEcAn. First, you’ll need to create a input record in BETY that will have a file record and format record reflecting the location and format of your file. Instructions can be found in our How to Insert new Input Data page. Once you have created an input record you must take note of the input id of your record. An easy way to take note of this is in the URL of the BETY webpage that shows your input record. In this example we use an input record with the id 1000013064 which can be found at this url: https://psql-pecan.bu.edu/bety/inputs/1000013064# . Note that this is the Boston University BETY database. If you are on a different machine, your url will be different. With the input id in hand you can now edit a pecan XML so that the PEcAn function ic.process will know where to look in order to process your data. The inputs section of your pecan XML will look like this. As of now ic.process is set up to work with the ED2 model so we will use ED2 settings and then grab the intermediary Rds data file that is created as the standard PEcAn file. For your Inputs section you will need to input your input id wherever you see the source.ic flag. <inputs> <css> <source>FFT</source> <output>css</output> <username>pecan</username> <source.id>1000013064</source.id> <useic>TRUE</useic> <meta> <trk>1</trk> <age>70</age> </meta> </css> <pss> <source>FFT</source> <output>pss</output> <username>pecan</username> <source.id>1000013064</source.id> <useic>TRUE</useic> </pss> <site> <source>FFT</source> <output>site</output> <username>pecan</username> <source.id>1000013064</source.id> <useic>TRUE</useic> </site> <met> <source>CRUNCEP</source> <output>ED2</output> </met> <lu> <id>294</id> </lu> <soil> <id>297</id> </soil> <thsum> <id>295</id> </thsum> <veg> <id>296</id> </veg> </inputs> Once you edit your PEcAn.xml you can than create a settings object using PEcAn functions. Your pecan.xml must be in your working directory. settings <- PEcAn.settings::read.settings("pecan.xml") settings <- PEcAn.settings::prepare.settings(settings, force=FALSE) You can then execute the ic.process function to convert data into a standard Rds file: input <- settings$run$inputs dir <- "." ic.process(settings, input, dir, overwrite = FALSE) Note that the argument dir is set to the current directory. You will find the final ED2 file there. More importantly though you will find the .Rds file within the same directory. 30.2.0.2 Example 3 Pool Initial Condition files If you have pool vegetation data, you’ll need the pool_ic_list2netcdf.R function to convert the pool data into PEcAn standard. The function stands alone and requires that you provide a named list of netcdf dimensions and values, and a named list of variables and values. Names and units need to match the standard_vars.csv table found here. #Create a list object with necessary dimensions for your site input<-list() dims<- list(lat=-115,lon=45, time= 1) variables<- list(SoilResp=8,TotLivBiom=295) input$dims <- dims
input\$vals <- variables
Once this is done, set outdir to where you’d like the file to write out to and a siteid. Siteid in this can be used as an file name identifier. Once part of the automated workflow siteid will reflect the site id within the BET db.
outdir <- "."
siteid <- 772
pool_ic_list2netcdf(input = input, outdir = outdir, siteid = siteid)
You should now have a netcdf file with initial conditions.
30.3 Soil Data
30.3.0.1 Example 1: Converting Data in hand
Local data that has the correct names and units can easily be written out in PEcAn standard using the function soil2netcdf.
soil.data <- list(volume_fraction_of_sand_in_soil = c(0.3,0.4,0.5),
volume_fraction_of_clay_in_soil = c(0.3,0.3,0.3),
soil_depth = c(0.2,0.5,1.0))
soil2netcdf(soil.data,"soil.nc")
At the moment this file would need to be inserted into Inputs manually. By default, this function also calls soil_params, which will estimate a number of hydraulic and thermal parameters from texture. Be aware that at the moment not all model couplers are yet set up to read this file and/or convert it to model-specific formats.
30.3.0.2 Example 2: Converting PalEON data
In addition to location-specific soil data, PEcAn can extract soil texture information from the PalEON regional soil product, which itself is a subset of the MsTMIP Unified North American Soil Map. If this product is installed on your machine, the appropriate step in the do_conversions workflow is enabled by adding the following tag under <inputs> in your pecan.xml
<soil>
<id>1000012896</id>
</soil>
In the future we aim to extend this extraction to a wider range of soil products.
|
2019-09-20 18:57:54
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23614521324634552, "perplexity": 7685.464573112533}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514574058.75/warc/CC-MAIN-20190920175834-20190920201834-00299.warc.gz"}
|
https://www.studysmarter.us/textbooks/business-studies/horngrens-financial-and-managerial-accounting-6th/responsibility-accounting-and-performance-evaluation/24-pgb-one-subunit-of-track-sports-company-had-the-following/
|
Suggested languages for you:
24 PGB
Expert-verified
Found in: Page 1324
### Horngren'S Financial And Managerial Accounting
Book edition 6th
Author(s) Tracie L. Miller-Nobles, Brenda L. Mattison
Pages 992 pages
ISBN 9780134486833
# One subunit of Track Sports Company had the following financial results last month:Subunit X Actual Results Flexible Budget Flexible Budget % Variance Variance (F or U) (F or U)Net Sales Revenue $474,000$ 455,000Variable Expenses 261,000 255,000Contribution Margin 213,000 200,000Traceable Fixed Expenses 38,000 29,000Divisional Segment Margin $175,000$ 171,000Requirements1. Complete the performance evaluation report for this subunit (round to two decimal places).2. Based on the data presented and your knowledge of the company, what type of responsibility center is this subunit?3. Which items should be investigated if part of management’s decision criteria is to investigate all variances equal to or exceeding $8,000 and exceeding 10% (both criteria must be met)?4. Should only unfavorable variances be investigated? Explain.5. Is it possible that the variances are due to a higher-than-expected sales volume? Explain.6. Will management place equal weight on each of the variances exceeding$8,000? Explain.7. Which balanced scorecard perspective is being addressed through this performance report? In your opinion, is this performance report a lead or a lag indicator? Explain.8. List one key performance indicator for the three other balanced scorecard perspectives. Make sure to indicate which perspective is being addressed by the indicators you list.
(1) Performance evaluation report is completed in Step 1.
(2) Profit center
(3) Variance equal to and exceeding $8,000 and exceeding 10% should be investigated. (4) No, both should be investigated, for effective decision making. (5) No, not possible as fixed expenses may vary (6) No, equal weights cannot be assigned due to degree of variances. (7) Financial performance and operational performance (8) Customer, Internal Business, and Learning & Growth See the step by step solution ### Step by Step Solution ## Performance Evaluation Report Subunit X Actual Results Flexible Budget Flexible Budget Variance (F or U) % Variance (F or U) Net Sales Revenue$ 474,000 $455,000$ 19,000 (F) 4.18% (F) Variable Expenses 261,000 255,000 6,000 (U) 2.35% (U) Contribution Margin 213,000 200,000 13,000 (F) 6.5% (F) Traceable Fixed Expenses 38,000 29,000 9,000 (U) 31.03% (U) Divisional Segment Margin 175,000 171,000 4,000 (F) 2.34% (F)
% Variance has been computed by the following formula –
## Responsibility Center
Responsibility centers are created in a decentralized organization to divide the different responsibilities into different subunits so that there can be effective in the decision-making process.
There can be different 4 types of responsibility centers.
Based on the given data, the responsibility center can be categorized as a profit center. The profit center is the center that is responsible for both revenue and cost. The given data set is related to the performance report for Sales revenue and all expenses. So this is a profit center.
## Investigated items
The management has set the criteria are for investigating all items having variances –
a) Equal to and exceeding $8,000 and b) Exceeding 10% As it is necessary that both conditions must be met, so only the “Traceable Fixed Expenses” should from the performance report be investigated as it has a$9,000 unfavorable variance and variance is 31% (exceeding 10%).
Besides this, items having equal to or more than $8,000 variance has no variance degree exceeding 10%. ## Investigated variances – favorable vs unfavorable Favorable variances are those that perform better than the flexible budget. On the contrary, unfavorable variances are those that perform against the flexible budget. Businesses give priority to Unfavorable variances to get the reason for having an unfavorable estimate. But at the same time investigating favorable variances also helps in knowing the factors for having favorable results. Thus for effective decision making both favorable and unfavorable variances should be investigated. ## Variances and sales volume In the given case, the responsibility center is the profit center. And in the profit center, both revenue and cost are controlled. For every generated revenue, there are some costs associated with it. But some costs are also independent of sales like fixed costs. Variances are the difference between estimated and actual results. So in the given case, the variable cost may vary as per the sales level but the fixed cost variance would not be associated with the sales revenue variance. Thus it is not possible that variances are due to higher-than-expected sales volume. Irrespective of sales volume, the fixed expense may vary from the expected amount. ## Management’s decision on variances In the given case, some variances are having$8,000 value but do not vary very much from the flexible budget. On the contrary, some variances exceeding $8,000 has even higher variance percent from the benchmarking. As the management is investigating variances above and equal to$8,000 and also exceeding the 10% benchmark, it will not place equal weight on all variances exceeding \$8,000.
The management would place more weight on the items having higher variance than on the items having lower variance. Thus Variance having a 31% result would be given more priority than the variance having only a 10% change.
## Balanced Scorecard Perspective
The balanced scorecard is a performance evaluation report that consists of both financial performance and operational performance.
So the perspective of the balanced scorecard is based on these two performance types.
In the given case, the perspective of the balanced scorecard is Financial. This is so as the key performance indicators are – contribution margin, fixed expenses, Divisional segment margin, etc.
This performance measure is a lag indicator. The reason is that the fanatical performance reported tends to reveal the results of past action only and no indication has been generated for future performance.
## Other Balanced Scorecard Perspective and key indicators
The other three different perspectives are – Customer, Internal Business, and Learning & Growth.
The key performance indicator for each perspective is as follows –
Perspective Key Performance Indicator Customer Percentage of market share Internal Business Number of new products developed Learning and Growth Hours of employee training
## Recommended explanations on Business-studies Textbooks
94% of StudySmarter users get better grades.
|
2022-09-28 13:59:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24019144475460052, "perplexity": 5846.994959715656}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335254.72/warc/CC-MAIN-20220928113848-20220928143848-00198.warc.gz"}
|
https://www.zbmath.org/?q=ci%3A0749.60076
|
# zbMATH — the first resource for mathematics
The critical barrier for the survival of branching random walk with absorption. (English. French summary) Zbl 1263.60076
Author’s abstract: We study a branching random walk on $$\mathbb{R}$$ with an absorbing barrier. The position of the barrier depends on the generation. In each generation, only the individuals born below the barrier survive and reproduce. Given a reproduction law, J. D. Biggins et al. [Ann. Appl. Probab. 1, No. 4, 573–581 (1991; Zbl 0749.60076)] determined whether a linear barrier allows the process to survive. In this paper, we refine their result: in the boundary case in which the speed of the barrier matches the speed of the minimal position of a particle in a given generation, we add a second order term $$an^{1/3}$$ to the position of the barrier for the $$n$$th generation and find an explicit critical value $$a_{c}$$ such that the process dies when $$a<a_{c}$$ and survives when $$a>a_{c}$$. We also obtain the rate of extinction when $$a<a_{c}$$ and a lower bound for the population when it survives.
##### MSC:
60J80 Branching processes (Galton-Watson, birth-and-death, etc.)
Full Text:
##### References:
[1] L. Addario-Berry and N. Broutin. Total progeny in killed branching random walk. Probab. Theory Related Fields 151 (2011) 265-295. · Zbl 1230.60091 · doi:10.1007/s00440-010-0299-2 [2] E. Aïdékon. Tail asymptotics for the total progeny of the critical killed branching random walk. Electron. Commun. Probab. 15 (2010) 522-533. · Zbl 1226.60117 · emis:journals/EJP-ECP/_ejpecp/ECP/viewarticle44eb.html [3] E. Aïdékon, Y. Hu and O. Zindy. The precise tail behavior of the total progeny of a killed branching random walk. Preprint, 2011. Available at [math.PR]. 1102.5536 · Zbl 1288.60105 · arxiv.org [4] E. Aïdékon and B. Jaffuel. Survival of branching random walks with absorption. Stochastic Process. Appl. 121 (2011) 1901-1937. · Zbl 1236.60080 · doi:10.1016/j.spa.2011.04.006 [5] V. I. Arnol’d. Ordinary Differential Equations . MIT Press, Cambridge, MA, 1973. Translated and edited by R. A. Silverman. · Zbl 0296.34001 [6] J. D. Biggins and A. E. Kyprianou. Seneta-Heyde norming in the branching random walk. Ann. Probab. 25 (1997) 337-360. · Zbl 0873.60062 · doi:10.1214/aop/1024404291 [7] J. D. Biggins, B. D. Lubachevsky, A. Shwartz and A. Weiss. A branching random walk with barrier. Ann. Appl. Probab. 1 (1991) 573-581. · Zbl 0749.60076 · doi:10.1214/aoap/1177005839 [8] B. Derrida and D. Simon. The survival probability of a branching random walk in presence of an absorbing wall. Europhys. Lett. 78 (2007). Art. 60006, 6. · Zbl 1244.82071 · doi:10.1209/0295-5075/78/60006 [9] B. Derrida and D. Simon. Quasi-stationary regime of a branching random walk in presence of an absorbing wall. J. Stat. Phys. 131 (2008) 203-233. · Zbl 1144.82321 · doi:10.1007/s10955-008-9504-4 [10] N. Gantert, Y. Hu and Z. Shi. Asymptotics for the survival probability in a killed branching random walk. Ann. Inst. H. Poincaré Probab. Stat. 47 (2011) 111-129. · Zbl 1210.60093 · doi:10.1214/10-AIHP362 · eudml:243873 [11] J. W. Harris and S. C. Harris. Survival probabilities for branching Brownian motion with absorption. Electron. Commun. Probab. 12 (2007) 81-92. · Zbl 1132.60059 · doi:10.1214/ECP.v12-1259 · eudml:128570 [12] H. Kesten. Branching Brownian motion with absorption. Stochastic Processes Appl. 7 (1978) 9-47. · Zbl 0383.60077 · doi:10.1016/0304-4149(78)90035-2 [13] A. A. Mogul’skii. Small deviations in the space of trajectories. Theory Probab. Appl. 19 (1975) 726-736. · Zbl 1122.03015 [14] R. Pemantle. Search cost for a nearly optimal path in a binary tree. Ann. Appl. Probab. 19 (2009) 1273-1291. · Zbl 1176.68093 · doi:10.1214/08-AAP585
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
|
2021-02-28 13:18:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5989485383033752, "perplexity": 1983.9146674226977}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178360853.31/warc/CC-MAIN-20210228115201-20210228145201-00311.warc.gz"}
|
https://planetandepoch.com/stewart-%E2%80%93-calculus-%E2%80%93-3-5-%E2%80%93-implicit-differentiation/
|
# Stewart – Calculus – 3.5 – Implicit Differentiation and Derivatives of Inverse Trigonometric Functions
Find $y''$ by implicit differentiation:
$9x^{2}+y^{2}=9$.
$18x+2yy'=0$
$2yy'=-18x$
$y'=-9x/y$
$y''=-9(\frac{y\cdot 1\text{--}x\cdot y'}{y^{2}})$
$y''=-9(\frac{y\cdot 1\text{--}x(-9x/y)}{y^{2}})$ ($y'$ is replaced with $-9x/y$)
$y''=-9(\frac{y^{2}+9x^{2}}{y^{3}})$ (after multiplying by $y$ to eliminate denominator from $-9x$)
$y''=-9(\frac{9}{y^{3}})$ (notice that $y^{2}+9x^{2}$ equals the original equation)
So, $y''=\frac{-81}{y^{3}}$.
Derivatives of Inverse Trigonometric Functions
|
2021-03-05 06:34:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 15, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7653642296791077, "perplexity": 1158.41218872218}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178370239.72/warc/CC-MAIN-20210305060756-20210305090756-00483.warc.gz"}
|
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Book%3A_Symmetry_(Vallance)/01%3A_Chapters/1.30%3A_Appendix_B-_Point_Groups
|
# 1.30: Appendix B- Point Groups
## Non axial groups
$\begin{array}{l|c} C_1 & E \\ \hline A_1 & 1 \end{array} \label{30.1}$
$\begin{array}{l|cc|l|l} C_s & E & \sigma_h & & \\ \hline A & 1 & 1 & x, y , R_z & x^2, y^2, z^2, xy \\ A' & 1 & -1 & z, R_x, R_y & yz, xz \end{array} \label{30.2}$
$\begin{array}{l|cc|l|l} C_1 & E & i & & \\ \hline A_g & 1 & 1 & R_x, R_y, R_z & x^2, y^2, z^2, xy, xz, yz \\ A_u & 1 & -1 & x, y, z & \end{array} \label{30.3}$
## $$C_n$$ groups
$\begin{array}{l|cc|l|l} C_2 & E & C_2 & & \\ \hline A & 1 & 1 & z, R_z & x^2, y^2, z^2, xy \\ B & 1 & -1 & x, y , R_x, R_y & yz, xz \end{array} \label{30.4}$
$\begin{array}{l|c|l|l} C_3 & E \: \: \: \: \: C_3 \: \: \: \: \: C_3^2 & & c=e^{2\pi/3} \\ \hline A & 1 \: \: \: \: \: \: \: \: \: 1 \: \: \: \: \: \: \: \: \: 1 & x, R_z & x^2 + y^2, z^2 \\ E & \begin{Bmatrix} 1 & c & c^* \\ 1 & c^* & c \end{Bmatrix} & x, y, R_x, R_y, & x^2-y^2, xy, xz, yz \end{array} \label{30.5}$
$\begin{array}{l|c|l|l} C_4 & E \: \: \: \: \: C_4 \: \: \: \: \: C_2 \: \: \: \: \: C_4^3 & & \\ \hline A & 1 \: \: \: \: \: \: \: \: \: 1 \: \: \: \: \: \: \: \: \: 1 \: \: \: \: \: \: \: \: \: 1 & z, R_z & x^2 + y^2, z^2 \\ B & 1 \: \: \: \: -1 \: \: \: \: \: \: \: \: 1 \: \: \: \: -1 & & x^2 - y^2, xy \\ E & \begin{Bmatrix} 1 & i & -1 & -i \\ 1 & -i & -1 & i \end{Bmatrix} & x, y, R_x, R_y & yz, xz \end{array} \label{30.6}$
## $$C_{nv}$$ groups
$\begin{array}{l|cccc|l|l} C_{2v} & E & C_2 & \sigma_v(xz) & \sigma_v'(yz) & & \\ \hline A_1 & 1 & 1 & 1 & 1 & z & x^2, y^2, z^2 \\ A_2 & 1 & 1 & -1 & -1 & R_z & xy \\ B_1 & 1 & -1 & 1 & -1 & x, R_y & xz \\ B_2 & 1 & -1 & -1 & 1 & y, R_x & yz \end{array} \label{30.7}$
$\begin{array}{l|ccc|l|l} C_{3v} & E & 2C_3 & 3\sigma_v & & \\ \hline A_1 & 1 & 1 & 1 & z & x^2 + y^2, z^2 \\ A_2 & 1 & 1 & -1 & R_z & \\ E & 2 & -1 & 0 & x, y, R_x, R_y & x^2 - y^2, xy, xz, yz \end{array} \label{30.8}$
## $$C_{nh}$$ groups
$\begin{array}{l|cccc|l|l} C_{2h} & E & C_2 & i & \sigma_h & & \\ \hline A_g & 1 & 1 & 1 & 1 & R_z & x^2, y^2, z^2, xy \\ B_g & 1 & -1 & 1 & -1 & R_x, R_y & xz, yz \\ A_u & 1 & 1 & -1 & -1 & z & \\ B_u & 1 & -1 & -1 & 1 & x, y & \end{array} \label{30.9}$
$\begin{array}{l|c|l|l}C_{3h} & E \: \: \: \: \: C_3 \: \: \: \: \: C_3^2 \: \: \: \: \: \sigma_h \: \: \: \: \: S_3 \: \: \: \: \: S_3^5 & & c = e^{2\pi/3} \\ \hline A & 1 \: \: \: \: \: \: \: \: 1 \: \: \: \: \: \: \: \: 1 \: \: \: \: \: \: \: \: 1 \: \: \: \: \: \: \: \: 1 \: \: \: \: \: \: \: \: 1 & R_z & x^2 + y^2, z^2 \\ E & \begin{Bmatrix} 1 & \: \: c & \: \: c^* & \: \: 1 & \: \: c & \: \: c^* \\ 1 & \: \: c^* & \: \: c & \: \: 1 & \: \: c^* & \: \: c \end{Bmatrix} & x, y & x^2 - y^2, xy \\ A' & 1 \: \: \: \: \: \: 1 \: \: \: \: \: \: 1 \: \: \: \: -1 \: \: \: \: -1 \: \: \: \: \: -1 & z & \\ E' & \begin{Bmatrix} 1 & c & c^* & -1 & -c & -c^* \\ 1 & c^* & c & -1 & -c^* & -c \end{Bmatrix} & R_x, R_y & xz, yz \end{array} \label{30.10}$
## $$D_n$$ groups
$\begin{array}{l|cccc|l|l} D_2 & E & C_2(z) & C_2(y) & C_2(x) & & \\ \hline A & 1 & 1 & 1 & 1 & & x^2, y^2, z^2 \\ B_1 & 1 & 1 & -1 & -1 & z, R_z & xy \\ B_2 & 1 & -1 & 1 & -1 & y, R_y & xz \\ B_3 & 1 & -1 & -1 & 1 & x, R_x & yz \end{array} \label{30.11}$
$\begin{array}{l|ccc|l|l} D_3 & E & 2C_3 & 3C_2 & & \\ \hline A_1 & 1 & 1 & 1 & & x^2 + y^2, z^2 \\ A_2 & 1 & 1 & -1 & z, R_z & \\ E & 2 & -1 & 0 & x, y, R_x, R_y & x^2 - y^2, xy, xz, yz \end{array} \label{30.12}$
## $$D_{nh}$$ groups
$\begin{array}{l|cccccccc|l|l} D_{2h} & E & C_2(z) & C_2(y) & C_2(x) & i & \sigma_xy) & \sigma(xz) & \sigma(yz) & & \\ \hline A_g & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & & x^2, y^2, z^2 \\ B_{1g} & 1 & 1 & -1 & -1 & 1 & 1 & -1 & -1 & R_z & xy \\ B_{2g} & 1 & -1 & 1 & -1 & 1 & -1 & 1 & -1 & R_y & xz \\ B_{3g} & 1 & -1 & -1 & 1 & 1 & -1 & -1 & 1 & R_x & yz \\ A_u & 1 & 1 & 1 & 1 & -1 & -1 & -1 & -1 & & \\ B_{1u} & 1 & 1 & -1 & -1 & -1 & -1 & 1 & 1 & z & \\ B_{2u} & 1 & -1 & 1 & -1 & -1 & 1 & -1 & 1 & y & \\ B_{3u} & 1 & -1 & -1 & 1 & -1 & 1 & 1 & -1 & x & \end{array} \label{30.13}$
## $$D_{nd}$$ groups
$\begin{array}{l|ccccc|l|l} D_{2d} & E & 2S_4 & C_2 & 2C_2' & 2\sigma_d & & \\ \hline A_1 & 1 & 1 & 1 & 1 & 1 & & x^2 + y^2, z^2 \\ A_2 & 1 & 1 & 1 & -1 & -1 & R_z & \\ B_1 & 1 & -1 & 1 & 1 & -1 & & x^2 - y^2 \\ B_2 & 1 & -1 & 1 & -1 & 1 & z & xy \\ E & 2 & 0 & -2 & 0 & 0 & x, y, R_x, R_y & xy, yz \end{array} \label{30.14}$
$\begin{array}{l|cccccc|l|l} D_{3d} & E & 2C_3 & 3C_2 & i & 2S_6 & 3\sigma_d & & \\ \hline A_{1g} & 1 & 1 & 1 & 1 & 1 & 1 & & x^2 + y^2, z^2 \\ A_{2g} & 1 & 1 & -1 & 1 & 1 & -1 & R_z & \\ E_g & 2 & -1 & 0 & 2 & -1 & 0 & R_x, R_y & x^2 - y^2, xy, xz, yz \\ A_{1u} & 1 & 1 & 1 & -1 & -1 & -1 & & \\ A_{2u} & 1 & 1 & -1 & -1 & -1 & 1 & z & \\ E_u & 2 & -1 & 0 & -2 & 1 & 0 & x, y & \end{array} \label{30.15}$
### $$C_{\infty v}$$ and $$D_{\infty h}$$
$\begin{array}{l|cccccccc|l|l} D_{\infty h} & E & 2C_\infty^\Phi & \ldots & \infty \sigma_v & i & 2S_\infty^\Phi & \ldots & \infty C_2 & & \\ \hline \Sigma_g^+ & 1 & 1 & \ldots & 1 & 1 & 1 & \ldots & 1 & & x^2 + y^2, z^2 \\ \Sigma_g^- & 1 & 1 & \ldots & -1 & 1 & 1 & \ldots & -1 & R_z & \\ \Pi_g & 2 & 2cos \Phi & \ldots & 0 & 2 & -2cos \Phi & \ldots & 0 & R_x, R_y & xz, yz \\ \Delta_g & 2 & 2cos 2\Phi & \ldots & 0 & 2 & 2cos 2\Phi & \ldots & 0 & & x^2 - y^2, xy \\ \ldots & \ldots & \ldots & \ldots & \ldots & \ldots & \ldots & \ldots & \ldots & & \\ \Sigma_u^+ & 1 & 1 & \ldots & 1 & -1 & -1 & \ldots & -1 & z & \\ \Sigma_u^- & 1 & 1 & \ldots & -1 & -1 & -1 & \ldots & 1 & & \\ \Pi_u & 2 & 2cos \Phi & \ldots & 0 & -2 & 2cos \Phi & \ldots & 0 & x, y & \\ \Delta_u & 2 & 2cos 2\Phi & \ldots & 0 & -2 & -2cos 2\Phi & \ldots & 0 & & \\ \ldots & \ldots & \ldots & \ldots & \ldots & \ldots & \ldots & \ldots & \ldots & & \end{array} \label{30.16}$
## $$S_n$$ groups
$\begin{array}{l|c|l|l} S_4 & E \: \: \: \: \: S_4 \: \: \: \: \: C_2 \: \: \: \: \: S_4^3 & & \\ \hline A & 1 \: \: \: \: \: \: \: \: \: 1 \: \: \: \: \: \: \: \: \: 1 \: \: \: \: \: \: \: \: \: 1 & R_z & x^2 + y^2, z^2 \\ B & 1 \: \: \: \: -1 \: \: \: \: \: \: \: \: 1 \: \: \: \: -1 & z & x^2 - y^2, xy \\ E & \begin{Bmatrix} 1 & i & -1 & -i \\ 1 & -i & -1 & i \end{Bmatrix} & x, y, R_x, R_y & xz, yz \end{array} \label{30.17}$
$\begin{array}{l|c|l|l} S_6 & E \: \: \: \: \: C_3 \: \: \: \: \: C_3^2 \: \: \: \: \: i \: \: \: \: \: S_6^5 \: \: \: \: \: S_6 & & c=e^{2\pi/3} \\ \hline A_g & 1 \: \: \: \: \: \: \: \: 1 \: \: \: \: \: \: \: \: 1 \: \: \: \: \: \: \: \: 1 \: \: \: \: \: \: \: \: 1 \: \: \: \: \: \: \: \: 1 & R_z & x^2 + y^2, z^2 \\ E_g & \begin{Bmatrix} 1 & \: \: c & \: \: c^* & \: \: 1 & \: \: c & \: \: c^* \\ 1 \: \: & \: \: c^* & \: \: c & \: \: 1 & \: \: c^* & \: \: c \end{Bmatrix} & R_x, R_y & x^2 - y^2, xy, xz, yz \\ A_u & 1 \: \: \: \: \: \: 1 \: \: \: \: \: \: 1 \: \: \: \: -1 \: \: \: \: -1 \: \: \: \: \: -1 & z & \\ E_u & \begin{Bmatrix} 1 & c & c^* & -1 & -c & -c^* \\ 1 & c^* & c & -1 & -c^* & -c \end{Bmatrix} & x, y & \end{array} \label{30.18}$
## Cubic groups
$\begin{array}{l|c|l|l} T & E \: \: \: 4C_3 \: \: \: 4C_3^2 \: \: \: 3C_2 & & c=e^{2\pi/3} \\ \hline A & 1 \: \: \: \: \: \: \: 1 \: \: \: \: \: \: \: 1 \: \: \: \: \: \: \: 1 & & x^2 + y^2, z^2 \\ E & \begin{Bmatrix} 1 & c & c^* & 1 \\ 1 & c* & c & 1 \end{Bmatrix} & & 2z^2 - x^2 - y^2, x^2 - y^2 \\ T & 3 \: \: \: \: \: 0 \: \: \: \: \: \: \: 0 \: \: \: -1 & R_x, R_y, R_z, x, y, z & xy, xz, yz \end{array} \label{30.19}$
$\begin{array}{l|ccccc|l|l} T_d & E & 8C_3 & 3C_2 & 6S_4 & 6\sigma_d & & \\ \hline A_1 & 1 & 1 & 1 & 1 & 1 & & x^2 + y^2, z^2 \\ A_2 & 1 & 1 & 1 & -1 & -1 & & \\ E & 2 & -1 & 2 & 0 & 0 & & 2z^2 - x^-2 - y^2, x^2 - y^2 \\ T_1 & 3 & 0 & -1 & 1 & -1 & R_x, R_y, R_z & \\ T_2 & 3 & 0 & -1 & -1 & 1 & x, y, z & xy, xz, yz \end{array} \label{30.20}$
## Direct product tables
### For the point groups O and T$$_d$$ (and O$$_h$$)
$\begin{array}{llllll} & \boldsymbol{A_1} & \boldsymbol{A_2} & \boldsymbol{E} & \boldsymbol{T_1} & \boldsymbol{T_2} \\ \boldsymbol{A_1} & A_1 & A_2 & E & T_1 & T_2 \\ \boldsymbol{A_2} & & A_1 & E & T_2 & T_1 \\ \boldsymbol{E} & & & A_1 + A_2 + E & T_1 + T_2 & T_1 + T_2 \\ \boldsymbol{T_1} & & & & A_1 + E + T_1 + T_2 & A_2 + E + T_1 +T_2 \\ \boldsymbol{T_2} & & & & & A_1 + E + T_1 + T_2 \end{array} \label{30.21}$
### For the point groups D$$_4$$, C$$_{4v}$$, D$$_{2d}$$ (and $$D_{4h} = D_4 \otimes C_i$$)
$\begin{array}{llllll} & \boldsymbol{A_1} & \boldsymbol{A_2} & \boldsymbol{B_1} & \boldsymbol{B_2} & \boldsymbol{E} \\ \boldsymbol{A_1} & A_1 & A_2 & B_1 & B_2 & E \\ \boldsymbol{A_2} & & A_1 & B_2 & B_1 & E \\ \boldsymbol{B_1} & & & A_1 & A_2 & E \\ \boldsymbol{B_2} & & & & A_1 & E \\ \boldsymbol{E} & & & & & A_1 + A_2 + B_1 + B_2 \end{array} \label{30.22}$
### For the point groups D$$_3$$ and C$$_{3v}$$
$\begin{array}{llll} & \boldsymbol{A_1} & \boldsymbol{A_2} & \boldsymbol{E} \\ \boldsymbol{A_1} & A_1 & A_2 & E \\ \boldsymbol{A_2} & & A_1 & E \\ \boldsymbol{E} & & & A_1 + A_2 + E \end{array} \label{30.23}$
### For the point groups D$$_6$$, C$$_{6v}$$ and D$$_{3h}^*$$
$\begin{array}{lllllll} & \boldsymbol{A_1} & \boldsymbol{A_2} & \boldsymbol{B_1} & \boldsymbol{B_2} & \boldsymbol{E_1} & \boldsymbol{E_2} \\ \boldsymbol{A_1} & A_1 & A_2 & B_1 & B_2 & E_1 & E_2 \\ \boldsymbol{A_2} & & A_1 & B_2 & B_1 & E_1 & E_2 \\ \boldsymbol{B_1} & & & A_1 & A_2 & E_2 & E_1 \\ \boldsymbol{B_2} & & & & A_1 & E_2 & E_1 \\ \boldsymbol{E_1} & & & & & A_1 + A_2 + E_2 & B_1 + B_2 + E_1 \\ \boldsymbol{E_2} & & & & & & A_1 + A_2 + E_2 \end{array} \label{30.24}$
$$^*$$in D$$_{3h}$$ make the following changes in the above table
$\begin{array}{ll} \text{In table} & \text{In D}_{3h} \\ A_1 & A_1' \\ A_2 & A_2' \\ B_1 & A_1'' \\ B_2 & A_2'' \\ E_1 & E'' \\ E_2 & E' \end{array} \label{30.25}$
1.30: Appendix B- Point Groups is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Claire Vallance via source content that was edited to conform to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
|
2022-07-04 22:09:59
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9559552669525146, "perplexity": 4572.508206440146}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104496688.78/warc/CC-MAIN-20220704202455-20220704232455-00642.warc.gz"}
|
https://www.physicsforums.com/threads/lorentz-contraction-of-a-light-pulse.361444/
|
# Lorentz contraction of a light pulse
Suppose the earth, moon, and an extraterrestial observer(ET) are at the corner of an equilateral triangle. The earth observer points a laser at the moon and emits a powerful laser pulse for exactly .01 seconds. The 1,860 mile long pulse travels to the moon in about 1.3 seconds. The ET knows that the pulse is .01 seconds long because he has been monitoring the earth scientists' conversations, but the ET can't see the pulse in the vacuum of space. Suppose, however, there is a low density of dust particles in space that scatters some of the light in the direction of the ET(similar to the visualization of a searchlight beam on a foggy night). The ET will see a 1,862 mile long "blip" travel to the moon in 1.3 seconds.
Or will he?
The pulse is moving at the speed of light and the ET knows the pulse lasted for.01 seconds(1,862 miles long). Knowing that Lorentz contraction always shrinks an object in the direction of motion, the ET was expecting the Lorentz contrcted pulse to be shorter than 1,862 miles.What is the reason the light pulse is not Lorentz contracted? Is it because the pulse is not a material object and is therefore exempt from relativistic effects?
Suppose the earth, moon, and an extraterrestial observer(ET) are at the corner of an equilateral triangle. The earth observer points a laser at the moon and emits a powerful laser pulse for exactly .01 seconds. The 1,860 mile long pulse travels to the moon in about 1.3 seconds. The ET knows that the pulse is .01 seconds long because he has been monitoring the earth scientists' conversations, but the ET can't see the pulse in the vacuum of space. Suppose, however, there is a low density of dust particles in space that scatters some of the light in the direction of the ET(similar to the visualization of a searchlight beam on a foggy night). The ET will see a 1,862 mile long "blip" travel to the moon in 1.3 seconds.
Or will he?
The pulse is moving at the speed of light and the ET knows the pulse lasted for.01 seconds(1,862 miles long). Knowing that Lorentz contraction always shrinks an object in the direction of motion, the ET was expecting the Lorentz contrcted pulse to be shorter than 1,862 miles.What is the reason the light pulse is not Lorentz contracted? Is it because the pulse is not a material object and is therefore exempt from relativistic effects?
What is ET's relative motion to all this?
jtbell
Mentor
What is the reason the light pulse is not Lorentz contracted?
Length contraction is always relative to the proper length of an object: the length of the object in its rest frame (the inertial reference frame in which it is at rest). But a pulse of light has no rest frame! It moves at speed c in any inertial reference frame.
The length of the pulse does depend on the reference frame, nevertheless. First consider the rest frame of the source. Let the time interval that the source is "on" be $\Delta t_0$. Then the length of the pulse in this frame is $L_0 = c \Delta t_0$. We can't call this the "proper length" of the pulse, but I'm going to call it $L_0$ anyway for convenience.
Now consider what this looks like in a frame in which the source is moving. Suppose the source is moving in the +x direction, and it emits the light in that direction, too. In this frame, the time duration of the pulse is greater because of time dilation: $\Delta t = \gamma \Delta t_0$ where as usual
$$\gamma = \frac {1} {\sqrt {1 - v^2/c^2}}$$
Also suppose that in this frame, the source starts emitting when it's at x = 0. It stops emitting after time $\Delta t$. At this time, the front edge of the pulse is at $x_{front} = c \Delta t$, and the rear edge of the pulse is at $x_{rear} = v \Delta t$ because that's where the (moving) source is at that time. So the length of the pulse in this frame is
$$L = c \Delta t - v \Delta t = (c - v) \Delta t = (c - v) \gamma \Delta t_0$$
After some algebra we get
$$L = \sqrt { \frac {c - v} {c + v}} (c \Delta t_0) = \sqrt { \frac {c - v} {c + v}} L_0$$
So the length of the pulse does depend on the velocity of the source, but it's not the length-contraction relationship.
Last edited:
Dale
Mentor
2020 Award
If the ET is at rest wrt the other observers involved then it will measure the same length as they do.
FYI, in general I discourage the use of the time dilation and length contraction formulas. I think it is always best to use the full Lorentz transform equations. The length contraction and time dilation formulas then automatically pop out whenever appropriate simply by setting certain terms equal to 0.
The ET the earth and the moon are all at rest. The only thing moving is the light pulse.
|
2021-06-12 21:39:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5839507579803467, "perplexity": 472.9024714341815}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487586390.4/warc/CC-MAIN-20210612193058-20210612223058-00561.warc.gz"}
|
https://cs.stackexchange.com/questions/90602/prove-that-a-is-non-regular-using-k-complexity-non-regularity-theorem
|
# Prove that A is non-regular using K-Complexity Non regularity theorem
Given $Y^A_{x,n}$= the nth string $y∈Σ^∗$ (in lex order) such that $xy∈A$ (if n such y exits). So what completes $x$ if adding $n$ such $y$'s brings us to an element in the set $A$
Given $A \subseteq \Sigma^*$ has the following property, then it is not regular
For every $c \in \mathbb{Z^+}$ there exits $x \in \Sigma^*$, and $n \in \mathbb{Z^+}$ s.t. $Y^A_{x,n}$ exits and K-Comp $C(Y^A_{x,n}) > c + log(n)$
We are trying to show weather given a DFA will it end in a final state after certain number of steps and this process will tell us if that is not a possibility since we cannot prove regularity just non-regularity
$A=\{0^{2n}1x∣n∈\mathbb{N}, x∈\{0,1\}^∗$, and $|x|=n\}$ Prove that A is non-regular using KCR
All we have to do is to pick a $x$ and $y$ s.t the concatenation $xy \in A$.
If I let $x = 0^{2n}1$ and let $y = x$, then we build the set $Y^A_{x}$ given the $x$ and $y$ from above $Y^A_{x,1} = 001(01)$ and next element in the set $Y^A_{x,2} = 00001(0101)$.
Here is where I get stuck since I know I need to show $C(Y^A_{x,1}) > c + log(1)$ would this suffice to show that because of that this language is not regular? What is the best way to split the language for $x$ and $y$
• You cannot "let $y=x$". In fact, $Y^A_{x,1} = 0^n$ and $Y^A_{x,2} = 0^{n-1}1$. – Yuval Filmus Apr 12 '18 at 18:16
• what abot $y = 1x$? – ZeroDay Fracture Apr 12 '18 at 18:17
• You cannot choose $y$ at all. – Yuval Filmus Apr 12 '18 at 18:17
Suppose that $A$ is regular. Then there exists a constant $c$ such that for all $x \in \Sigma^*$ and for all $n$ such that $Y^A_{x,n}$ exists, $C(Y^A_{x,n}) \leq \log n + c$.
Let us take $x = 0^{2m}1$. Then $Y^A_{x,n}$ exists as long as $n \leq 2^m$ – in fact, $Y^A_{x,n}$ is just the $n$th binary string of length $m$. Therefore $$C(\text{nth binary string of length m}) \leq \log n + c.$$ In particular, choosing $n=1$, we get $$C(0^m) \leq c.$$ Clearly this fails for large enough $m$. This contradiction shows that $A$ is not regular.
• ok I think I see what is being done. Why would n $\leq 2^m$ is this because of the length of $|x| = n$ limitation? – ZeroDay Fracture Apr 12 '18 at 18:21
• I think I get it, the enumerated string is at most the length of the index in the enumeration since it is kolmogorov random so $C(0^m) \leq c$. Do correct me if im mistaken. – ZeroDay Fracture Apr 13 '18 at 0:00
|
2019-07-17 15:00:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8930836915969849, "perplexity": 200.85318030497905}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525312.3/warc/CC-MAIN-20190717141631-20190717163631-00323.warc.gz"}
|
https://www.physicsforums.com/threads/interpretation-of-a-tensor.833488/
|
# Interpretation of a tensor
1. Sep 20, 2015
### S. Moger
1. The problem statement, all variables and given/known data
$M= \begin{pmatrix} 2 & -1 & 0\\ -1 & 2 & -1\\ 0 & -1 & 2\\ \end{pmatrix}$
Compute $\frac{1}{6}\epsilon_{ijk}\epsilon_{lmn} M_{il} M_{jm} M_{kn}$ .
3. The attempt at a solution
I computed the result which is 4, by realizing that there are 36 non-zero levi-civita containing components to sum. Within this group, for each fixed {i,l} there are 4 possible sets of {in},{km}. As the order of the M_{ab}'s inside the product doesn't matter they can be rearranged to form sums of 2 each at a time. The positive contribution to the sum arises from the (now) three occurences of the product of the diagonal elements $M_{ii} M_{jj} M_{kk} = 2 \cdot 2 \cdot 2 = 8$ (not meant to be read as sums). The negative contributions arise from products of specific non-diagonal components multipied by a specific diagonal component. Because of interchangeability inside the product (or alternatively, in this case, symmetry) all get bundled in pairs of two. Each non-zero contribution (i.e. non-{1,3} permutation) then equals $2 \cdot (-1 \cdot -1 \cdot 2) = 4$, and there are three of them. So the final result turns out to be
$2 (3 \cdot 8 - 3 \cdot 4) = 24$, which divided by 6 leaves us with 4.
However, it feels like I'm missing something here (I'm not using the symmetry in any kind of crucial way). Is this the way you would solve this problem? Would it be possible to "see" the result by simply looking at the expression (i.e. interpreting it before doing the math)? Or for example by expressing the epsilons in deltas?
( Edit: I now see it's the determinant. But still, it isn't super apparent from just doing the math mindlessly (which is my type of thing until I feel I grasp the basics.)
Last edited: Sep 20, 2015
2. Sep 25, 2015
### Greg Bernhardt
Thanks for the post! This is an automated courtesy bump. Sorry you aren't generating responses at the moment. Do you have any further information, come to any new conclusions or is it possible to reword the post?
3. Sep 25, 2015
### davidmoore63@y
Apologies for not having Latex here. As you have discovered, your expression is just the determinant of the matrix. This is evident by expanding
eps(i,j,k) eps(l,m,n) M(i,l) M(j,m) M(k,n) as six lines, one for each sequence of possible i,j,k that result in non zero epsilon components:
= eps (1,2,3) eps(l,m,n) M(1,l) M(2,m) M(3,n)
+ eps (2,3,1) eps(l,m,n) M(2,l) M(3,m) M(1,n)
+ eps (3,1,2) eps(l,m,n) M(3,l) M(1,m) M(2,n)
+ three other lines, you get the idea.
But each line is equal to the determinant which is defined as eps(l,m,n) M(1,l) M(2,m) M(3,n). To see this you have to reorder the l,m,n indices in each line except the first.
Your fundamental question, why can't this be done by looking at the symmetries of the M matrix, is equivalent to asking: can the determinant of a symmetric matrix be quickly evaluated by looking at symmetries? Not as far as I know.
4. Sep 25, 2015
### S. Moger
Thanks, that's a very clear and nice explanation.
|
2017-12-12 05:19:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.781577467918396, "perplexity": 966.2465444191889}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948515165.6/warc/CC-MAIN-20171212041010-20171212061010-00779.warc.gz"}
|
https://crad.ict.ac.cn/EN/Y2016/V53/I5/1086
|
ISSN 1000-1239 CN 11-1777/TP
Journal of Computer Research and Development ›› 2016, Vol. 53 ›› Issue (5): 1086-1094.
### Solving Cost Prediction Based Search in Symbolic Execution
Liu Jingde1,3, Chen Zhenbang1, Wang Ji1,2
1. 1(College of Computer, National University of Defense Technology, Changsha 410073); 2(Science and Technology on Parallel and Distributed Processing Laboratory (College of Computer, National University of Defense Technology), Changsha 410073); 3(95835 PLA Troops, Bayingolin, Xinjiang 841700)
• Online:2016-05-01
Abstract: In symbolic execution, constraint solving needs a large proportion of execution time. The solving time of a constraint differs a lot with respect to the complexity, which happens a lot when analyzing the programs with complex numerical calculations. Solving more constraints within a specified time contributes to covering more statements and exploring more paths. Considering this feature, we propose a solving cost prediction based search strategy for symbolic execution. Based on the experimental data of constraint solving, we conclude an empirical formula to evaluate the complexity of constraints, and predict the solving cost of a constraint combined with historical solving cost data. The formula is used in our strategy to explore the paths with a lower solving cost with a higher priority. We have implemented our strategy in KLEE, a state-of-art symbolic executor for C, and carried out the experiments on the randomly selected 12 modules in GNU Scientific Library (GSL). The experimental results indicate that: in a same period, compared with the existing strategy, our strategy can explore averagely 24.34% more paths, without sacrificing the statement coverage; and our strategy can find more bugs. In addition, the time of using our strategy for finding same bugs decreases 44.43% in average.
CLC Number:
|
2022-10-03 13:52:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25476768612861633, "perplexity": 2656.0374298984398}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00132.warc.gz"}
|
http://www.vista-survey.com/help/browse.dsb?view=app&ID=127
|
Can I modify the way the Average Score in the Analysis Report is calculated?
In the Analysis Report, Multiple Choice questions will contain an Average Score underneath the tabulated answers that looks something like this:
If you don't specify a value for each possible answer, Vista automatically assigns the values as 1, 2, 3, etc. The Average Score is then calculated using these values. In some cases, for example the case above, you may want to reverse the order of the values, or specify your own values. In the example above, we could specify the values as Very satisfied=5, Satisfied=4, ... Very Dissatisfied=1. After specifying the values the Average Score will reflect these new values, as follows:
For instructions on how to specify a value for answers, see Assigning a Value to Answers.
|
2017-12-16 10:56:56
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8691412806510925, "perplexity": 459.78166127910146}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948587577.92/warc/CC-MAIN-20171216104016-20171216130016-00309.warc.gz"}
|
https://nullmap.org/posts/computation/quick-sort/index.html
|
# Quick Sort
Julia makes it really easy to use the quick sort algorithm.
For array, A, with datatype Vector{K}, where K <: Any, we can use the in-place sorting function sort! with the alg flag with QuickSort.
sort!(A, alg=QuickSort)
In fact, by default Julia uses QuickSort for numerical array sorting and MergeSort for any others. More information on available sorting algorithms can be found in the Julia documentation for sorting.
|
2021-07-23 22:19:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19426803290843964, "perplexity": 5075.136787849623}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046150067.51/warc/CC-MAIN-20210723210216-20210724000216-00112.warc.gz"}
|
https://intelligencemission.com/free-electricity-on-the-weekend-free-electricity-light-bulb.html
|
The magnitude of G tells us that we don’t have quite as far to go to reach equilibrium. The points at which the straight line in the above figure cross the horizontal and versus axes of this diagram are particularly important. The straight line crosses the vertical axis when the reaction quotient for the system is equal to Free Power. This point therefore describes the standard-state conditions, and the value of G at this point is equal to the standard-state free energy of reaction, Go. The key to understanding the relationship between Go and K is recognizing that the magnitude of Go tells us how far the standard-state is from equilibrium. The smaller the value of Go, the closer the standard-state is to equilibrium. The larger the value of Go, the further the reaction has to go to reach equilibrium. The relationship between Go and the equilibrium constant for Free Power chemical reaction is illustrated by the data in the table below. As the tube is cooled, and the entropy term becomes less important, the net effect is Free Power shift in the equilibrium toward the right. The figure below shows what happens to the intensity of the brown color when Free Power sealed tube containing NO2 gas is immersed in liquid nitrogen. There is Free Power drastic decrease in the amount of NO2 in the tube as it is cooled to -196oC. Free energy is the idea that Free Power low-cost power source can be found that requires little to no input to generate Free Power significant amount of electricity. Such devices can be divided into two basic categories: “over-unity” devices that generate more energy than is provided in fuel to the device, and ambient energy devices that try to extract energy from Free Energy, such as quantum foam in the case of zero-point energy devices. Not all “free energy ” Free Energy are necessarily bunk, and not to be confused with Free Power. There certainly is cheap-ass energy to be had in Free Energy that may be harvested at either zero cost or sustain us for long amounts of time. Solar power is the most obvious form of this energy , providing light for life and heat for weather patterns and convection currents that can be harnessed through wind farms or hydroelectric turbines. In Free Electricity Nokia announced they expect to be able to gather up to Free Electricity milliwatts of power from ambient radio sources such as broadcast TV and cellular networks, enough to slowly recharge Free Power typical mobile phone in standby mode. [Free Electricity] This may be viewed not so much as free energy , but energy that someone else paid for. Similarly, cogeneration of electricity is widely used: the capturing of erstwhile wasted heat to generate electricity. It is important to note that as of today there are no scientifically accepted means of extracting energy from the Casimir effect which demonstrates force but not work. Most such devices are generally found to be unworkable. Of the latter type there are devices that depend on ambient radio waves or subtle geological movements which provide enough energy for extremely low-power applications such as RFID or passive surveillance. [Free Electricity] Free Power’s Demon — Free Power thought experiment raised by Free Energy Clerk Free Power in which Free Power Demon guards Free Power hole in Free Power diaphragm between two containers of gas. Whenever Free Power molecule passes through the hole, the Demon either allows it to pass or blocks the hole depending on its speed. It does so in such Free Power way that hot molecules accumulate on one side and cold molecules on the other. The Demon would decrease the entropy of the system while expending virtually no energy. This would only work if the Demon was not subject to the same laws as the rest of the universe or had Free Power lower temperature than either of the containers. Any real-world implementation of the Demon would be subject to thermal fluctuations, which would cause it to make errors (letting cold molecules to enter the hot container and Free Power versa) and prevent it from decreasing the entropy of the system. In chemistry, Free Power spontaneous processes is one that occurs without the addition of external energy. A spontaneous process may take place quickly or slowly, because spontaneity is not related to kinetics or reaction rate. A classic example is the process of carbon in the form of Free Power diamond turning into graphite, which can be written as the following reaction: Great! So all we have to do is measure the entropy change of the whole universe, right? Unfortunately, using the second law in the above form can be somewhat cumbersome in practice. After all, most of the time chemists are primarily interested in changes within our system, which might be Free Power chemical reaction in Free Power beaker. Free Power we really have to investigate the whole universe, too? (Not that chemists are lazy or anything, but how would we even do that?) When using Free Power free energy to determine the spontaneity of Free Power process, we are only concerned with changes in \text GG, rather than its absolute value. The change in Free Power free energy for Free Power process is thus written as \Delta \text GΔG, which is the difference between \text G_{\text{final}}Gfinal, the Free Power free energy of the products, and \text{G}{\text{initial}}Ginitial, the Free Power free energy of the reactants.
###### The solution to infinite energy is explained in the bible. But i will not reveal it since it could change our civilization forever. Transportation and space travel all together. My company will reveal it to thw public when its ready. My only hint to you is the basic element that was missing. Its what we experience in Free Power everyday matter. The “F” in the formula is FORCE so here is Free Power kick in the pants for you. “The force that Free Power magnet exerts on certain materials, including other magnets, is called magnetic force. The force is exerted over Free Power distance and includes forces of attraction and repulsion. Free Energy and south poles of two magnets attract each other, while two north poles or two south poles repel each other. ” What say to that? No, you don’t get more out of it than you put in. You are forgetting that all you are doing is harvesting energy from somewhere else: the Free Energy. You cannot create energy. Impossible. All you can do is convert energy. Solar panels convert energy from the Free Energy into electricity. Every second of every day, the Free Energy slowly is running out of fuel.
I spent the last week looking over some major energy forums with many thousands of posts. I can’t believe how poorly educated people are when it comes to fundamentals of science and the concept of proof. It has become cult like, where belief has overcome reason. Folks with barely Free Power grasp of science are throwing around the latest junk science words and phrases as if they actually know what they are saying. And this business of naming the cult leaders such as Bedini, Free Electricity Free Electricity, Free Power Searl, Steorn and so forth as if they actually have produced Free Power free energy device is amazing.
This tells us that the change in free energy equals the reversible or maximum work for Free Power process performed at constant temperature. Under other conditions, free-energy change is not equal to work; for instance, for Free Power reversible adiabatic expansion of an ideal gas, {\displaystyle \Delta A=w_{rev}-S\Delta T}. Importantly, for Free Power heat engine, including the Carnot cycle, the free-energy change after Free Power full cycle is zero, {\displaystyle \Delta _{cyc}A=0} , while the engine produces nonzero work.
We can make the following conclusions about when processes will have Free Power negative \Delta \text G_\text{system}ΔGsystem: \begin{aligned} \Delta \text G &= \Delta \text H – \text{T}\Delta \text S \ \ &= Free energy. 01 \dfrac{\text{kJ}}{\text{mol-rxn}}-(Free energy \, \cancel{\text K})(0. 022\, \dfrac{\text{kJ}}{\text{mol-rxn}\cdot \cancel{\text K})} \ \ &= Free energy. 01\, \dfrac{\text{kJ}}{\text{mol-rxn}}-Free energy. Free Power\, \dfrac{\text{kJ}}{\text{mol-rxn}}\ \ &= -0. Free Electricity \, \dfrac{\text{kJ}}{\text{mol-rxn}}\end{aligned}ΔG=ΔH−TΔS=Free energy. 01mol-rxnkJ−(293K)(0. 022mol-rxn⋅K)kJ=Free energy. 01mol-rxnkJ−Free energy. 45mol-rxnkJ=−0. 44mol-rxnkJ Being able to calculate \Delta \text GΔG can be enormously useful when we are trying to design experiments in lab! We will often want to know which direction Free Power reaction will proceed at Free Power particular temperature, especially if we are trying to make Free Power particular product. Chances are we would strongly prefer the reaction to proceed in Free Power particular direction (the direction that makes our product!), but it’s hard to argue with Free Power positive \Delta \text GΔG! Our bodies are constantly active. Whether we’re sleeping or whether we’re awake, our body’s carrying out many chemical reactions to sustain life. Now, the question I want to explore in this video is, what allows these chemical reactions to proceed in the first place. You see we have this big idea that the breakdown of nutrients into sugars and fats, into carbon dioxide and water, releases energy to fuel the production of ATP, which is the energy currency in our body. Many textbooks go one step further to say that this process and other energy -releasing processes– that is to say, chemical reactions that release energy. Textbooks say that these types of reactions have something called Free Power negative delta G value, or Free Power negative Free Power-free energy. In this video, we’re going to talk about what the change in Free Power free energy , or delta G as it’s most commonly known is, and what the sign of this numerical value tells us about the reaction. Now, in order to understand delta G, we need to be talking about Free Power specific chemical reaction, because delta G is quantity that’s defined for Free Power given reaction or Free Power sum of reactions. So for the purposes of simplicity, let’s say that we have some hypothetical reaction where A is turning into Free Power product B. Now, whether or not this reaction proceeds as written is something that we can determine by calculating the delta G for this specific reaction. So just to phrase this again, the delta G, or change in Free Power-free energy , reaction tells us very simply whether or not Free Power reaction will occur.
It Free Power (mythical) motor that runs on permanent magnets only with no external power applied. How can you miss that? It’s so obvious. Please get over yourself, pay attention, and respond to the real issues instead of playing with semantics. @Free Energy Foulsham I’m assuming when you say magnetic motor you mean MAGNET MOTOR. That’s like saying democratic when you mean democrat.. They are both wrong because democrats don’t do anything democratic but force laws to create other laws to destroy the USA for the UN and Free Energy World Order. There are thousands of magnetic motors. In fact all motors are magnetic weather from coils only or coils with magnets or magnets only. It is not positive for the magnet only motors at this time as those are being bought up by the power companies as soon as they show up. We use Free Power HZ in the USA but 50HZ in Europe is more efficient. Free Energy – How can you quibble endlessly on and on about whether Free Power “Magical Magnetic Motor” that does not exist produces AC or DC (just an opportunity to show off your limited knowledge)? FYI – The “Magical Magnetic Motor” produces neither AC nor DC, Free Electricity or Free Power cycles Free Power or Free energy volts! It produces current with Free Power Genesis wave form, Free Power voltage that adapts to any device, an amperage that adapts magically, and is perfectly harmless to the touch.
# If there are no buyers in LA, then you could take your show on the road. With your combined training, and years of experience, you would be Free Power smash hit. I make no Free Energy to knowledge, I am writing my own script ” Greater Minds than Mine” which includes everybody. My greatest feat in life is find Free Power warm commode, on time….. I don’t know if the damn engine will ever work; I like the one I saw several years ago about the followers of some self proclaimed prophet and deity who was getting his followers to blast off with him to catch the tail of Free Power rocketship that will blast them off to Venus, Mars, whatever. I think you’re being argumentative. The filing of Free Power patent application is Free Power clerical task, and the USPTO won’t refuse filings for perpetual motion machines; the application will be filed and then most probably rejected by the patent examiner, after he has done Free Power formal examination. Model or no model the outcome is the same. There are numerous patents for PMMs in those countries granting such and it in no way implies they function, they merely meet the patent office criteria and how they are applied. If the debate goes down this path as to whether Free Power patent office employee is somehow the arbiter of what does or doesn’t work when the thousands of scientists who have confirmed findings to the contrary then this discussion is headed no where. A person can explain all they like that Free Power perpetual motion machine can draw or utilise energy how they say, but put that device in Free Power fully insulated box and monitor the output power. Those stubborn old fashioned laws of physics suggest the inside of the box will get colder till absolute zero is reached or till the hidden battery/capacitor runs flat. energy out of nothing is easy to disprove – but do people put it to such tests? Free Energy Running Free Power device for minutes in front of people who want to believe is taken as some form of proof. It’s no wonder people believe in miracles. Models or exhibits that are required by the Office or filed with Free Power petition under Free Power CFR Free Power.
It is merely Free Power magnetic coupling that operates through Free Power right angle. It is not Free Power free energy device or Free Power magnetic motor. Not relevant to this forum. Am I overlooking something. Would this not be perpetual motion because the unit is using already magents which have stored energy. Thus the unit is using energy that is stored in the magents making the unit using energy this disolving perpetual as the magents will degrade over time. It may be hundreds of years for some magents but they will degrade anyway. The magents would be acting as batteries even if they do turn. I spoke with PBS/NOVA. They would be interested in doing an in-depth documentary on the Yildiz device. I contacted Mr. Felber, Mr. Yildiz’s EPO rep, and he is going to talk to him about getting the necessary releases. Presently Mr. Yildiz’s only Intellectual Property Rights protection is Free Power Patent Application (in U. S. , Free Power Provisional Patent). But he is going to discuss it with him. Mr. Free Electricity, then we do agree, as I agree based on your definition. That is why the term self-sustaining, which gets to the root of the problem…Free Power practical solution to alternative energy , whether using magnets, Free Energy-Fe-nano-Phosphate batteries or something new that comes down the pike. Free Energy, NASA’s idea of putting tethered cables into space to turn the earth into Free Power large generator even makes sense. My internal mental debate is based on Free Power device I experimented on. Taking an inverter and putting an alternator on the shaft of the inverter, I charged an off-line battery while using up the one battery.
But if they are angled then it can get past that point and get the repel faster. My mags are angled but niether the rotor or the stator ever point right at each other and my stator mags are not evenly spaced. Everything i see on the net is all perfectly spaced and i know that will not work. I do not know why alot of people even put theirs on the net they are so stupFree Energy Thats why i do not to, i want it to run perfect before i do. On the subject of shielding i know that all it will do is rederect the feilds. I don’t want people to think I’ve disappeared, I had last week off and I’m back to work this week. I’m stealing Free Power little time during my break to post this. Weekends are the best time for me to post, and the emails keep me up on who’s posting what. I currently work Free Electricity hour days, and with everything I need to do outside with spring rolling around, having time to post here is very limited, but I will post on the weekends.
Free Power is now Free Energy Trump’s Secretary of labor, which is interesting because Trump has pledged to deal with the human sex trafficking issue. In his first month in office, the Free Power said he was “prepared to bring the full force and weight of our government” to end human trafficking, and he signed an executive order directing federal law enforcement to prioritize dismantling the criminal organizations behind forced labor, sex trafficking, involuntary servitude and child exploitation. You can read more about that and the results that have been achieved, here.
|
2021-04-21 14:45:20
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5633701086044312, "perplexity": 1163.7205170368386}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039544239.84/warc/CC-MAIN-20210421130234-20210421160234-00558.warc.gz"}
|
https://jp.maplesoft.com/support/help/errors/view.aspx?path=GraphTheory%2FGeometricGraphs%2FGabrielGraph
|
GabrielGraph - Maple Help
GraphTheory[GeometricGraphs]
GabrielGraph
construct Gabriel graph
Calling Sequence GabrielGraph( P, opts )
Parameters
P - Matrix or list of lists representing set of points opts - (optional) one or more options as specified below
Options
• triangulation : list of three-element lists
Supply a previously computed Delaunay triangulation of P. The input must be a valid Delaunay triangulation in the format returned by ComputationalGeometry[DelaunayTriangulation]: a list of three-element lists of integers, representing triangles in a triangulation of P.
• vertices : list of integers, strings or symbols
Specifies the vertices to be used in the generated graph.
• weighted : true or false
If weighted=true, the result is a weighted graph whose edge weights correspond to the Euclidean distance between points. The default is false.
Description
• The GabrielGraph(P, opts) command returns the Gabriel graph for the point set P.
• The parameter P must be a Matrix or list of lists representing a set of points.
Definitions
• Let $P$ be a set of points in $n$ dimensions, let $p$ and $q$ be arbitrary points from $P$, and let $\mathrm{dist}\left(p,q\right)$ be the Euclidean distance between $p$ and $q$.
• The Gabriel graph the graph whose vertices correspond to points in $P$ and whose edges consist of those pairs $p$ and $q$ from $P$ for which the closed ball centered halfway between $p$ and $q$ with diameter equal to $\mathrm{dist}\left(p-q\right)$ contains no other points from $P$.
Formally, define the ball $B\left(p,q\right)$ to be those points $r\in P$ such that $\mathrm{dist}\left(r,\frac{p}{2}+\frac{q}{2}\right)\le \frac{\mathrm{dist}\left(p,q\right)}{2}$. The Gabriel graph has an edge between $p$ and $q$ if and only if $B\left(p,q\right)=\varnothing$.
• The Gabriel graph has the following relationships with other graphs:
The Euclidean minimum spanning tree on P is a subgraph of the Gabriel graph on P.
The nearest neighbor graph on P is a subgraph of the Gabriel graph on P.
The relative neighborhood graph on P is a subgraph of the Gabriel graph on P.
The Urquhart graph on P is a subgraph of the Gabriel graph on P.
The Gabriel graph on P is a subgraph of the Delaunay graph on P.
Examples
Generate a set of random two-dimensional points and draw a Gabriel graph.
> $\mathrm{with}\left(\mathrm{GraphTheory}\right):$
> $\mathrm{with}\left(\mathrm{GeometricGraphs}\right):$
> $\mathrm{points}≔\mathrm{LinearAlgebra}:-\mathrm{RandomMatrix}\left(60,2,\mathrm{generator}=0..100.,\mathrm{datatype}=\mathrm{float}\left[8\right]\right)$
${\mathrm{points}}{≔}\begin{array}{c}\left[\begin{array}{cc}{9.85017697341803}& {82.9750304386195}\\ {86.0670183749663}& {83.3188659363996}\\ {64.3746795546741}& {73.8671607639673}\\ {57.3670557294666}& {2.34399775883031}\\ {23.6234264844933}& {52.6873367387328}\\ {47.0027547350003}& {22.2459488367552}\\ {74.9213491558963}& {62.0471820220718}\\ {92.1513434709073}& {96.3107262637080}\\ {48.2319624355944}& {63.7563267144141}\\ {90.9441877431805}& {33.8527464913022}\\ {⋮}& {⋮}\end{array}\right]\\ \hfill {\text{60 × 2 Matrix}}\end{array}$ (1)
> $\mathrm{GG}≔\mathrm{GabrielGraph}\left(\mathrm{points}\right)$
${\mathrm{GG}}{≔}{\mathrm{Graph 1: an undirected unweighted graph with 60 vertices and 105 edge\left(s\right)}}$ (2)
> $\mathrm{DrawGraph}\left(\mathrm{GG}\right)$
References
Gabriel, Kuno Ruben; Sokal, Robert Reuven (1969), "A new statistical approach to geographic variation analysis", Systematic Biology, Society of Systematic Biologists, 18(3): 259–278. doi:10.2307/2412323.
Compatibility
• The GraphTheory[GeometricGraphs][GabrielGraph] command was introduced in Maple 2020.
|
2022-05-19 15:51:01
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 29, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9378518462181091, "perplexity": 1187.1140050361876}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662529538.2/warc/CC-MAIN-20220519141152-20220519171152-00306.warc.gz"}
|
https://www.physicsforums.com/threads/hilbert-spaces.954904/
|
# Hilbert Spaces
In texts treating Hilbert spaces, it's usually given as an example that "any finite dimensional unitary space is complete", but I've found no proof so far and failed prove it myself.
|
2021-10-24 11:57:47
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9139809608459473, "perplexity": 763.251686672195}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585997.77/warc/CC-MAIN-20211024111905-20211024141905-00662.warc.gz"}
|
http://www.scienceforums.com/topic/13407-the-newest-latest-and-breaking-news-about-solar-energy/
|
# The newest, latest and breaking news about solar energy.
30 replies to this topic
### #1 Michaelangelica
Michaelangelica
Creating
• Members
• 7797 posts
Posted 26 December 2007 - 06:57 PM
$1 Dollar a Watt Solar. For Nanosolar of San Jose, California - and perhaps the rest of us - December 18, 2007 was an historic day. It was the day the company shipped the world’s lowest-cost solar panel. The company believes it can be the first solar manufacturer capable of profitably selling solar panels at 99 cents per watt. At that price solar energy becomes less expensive than coal, even when the cost of an entire system is considered. The US Department of Energy says a new coal plant costs about$2.10 per watt plus the cost of fuel and the cost of damaging emissions. There is no fuel cost with solar energy, nor any direct damage to the world.
The crew at Nanosolar must certainly be happy, but should the rest of us dance and cheer?
Be happy but the world is not yet powered by solar, nor will it be anytime soon. Those now pricey coal power plants operate 24/7. Solar power is still reliant on daylight.
Then again a lot of new coal-fired plants won’t have to be built, and for sunny, but windless, regions photovoltaic solar now becomes a viable choice for low cost renewable energy. Further, for windy areas, but with strong local opposition to wind energy, solar would be an option if land, rooftops or parking lots are available. Nobody protests against solar power plants.
So far nanotechnologies have produced cutting-edge energy products much as predicted.
Solar power is. . .
ENN: $1 Dollar a Watt Solar. ### #2 Michaelangelica Michaelangelica Creating • Members • 7797 posts Posted 31 December 2007 - 05:09 AM Panels start solar power 'revolution' The holy grail of renewable energy came a step closer yesterday as thousands of mass-produced wafer-thin solar cells printed on aluminium film rolled off a production line in California, heralding what British scientists called "a revolution" in generating electricity. The solar panels produced by a Silicon Valley start-up company, Nanosolar, are radically different from the kind that European consumers are increasingly buying to generate power from their own roofs. Printed like a newspaper directly on to aluminium foil, they are flexible, light and, if you believe the company, expected to make it as cheap to produce electricity from sunlight as from coal. Yesterday Nanosolar said its order books were full until mid-2009 and that a second factory would soon open in Germany where demand for solar power has rocketed. Britain was unlikely to benefit from the technology for some years because other countries paid better money for renewable electricity, it added. "Our first solar panels will be used in a solar power station in Germany," said Erik Oldekop, Nanosolar's manager in Switzerland. "We aim to produce the panels for 99 cents [50p] a watt, which is comparable to the price of electricity generated from coal. RESPECT - THE UNITY COALITION (The Respect Supporters Blog): Solar energy 'revolution' brings green power closer by John Vidal Just the yesterday I was reading discussion about the expense of solar power opposed to coal plants and how solar will never be as cheap as coal; and then I found out that Nanosolar, a Silicon Valley company funded by the founders of google, has announced that it has shipped it’s first solar panels. Why is this a big deal? Our product is defining in more ways I can enumerate here but includes: - the world’s first printed thin-film solar cell in a commercial panel product; - the world’s first thin-film solar cell with a low-cost back-contact capability; - the world’s lowest-cost solar panel - which we believe will make us the first solar manufacturer capable of profitably selling solar panels at as little as$.99/Watt;
- the world’s highest-current thin-film solar panel - delivering five times the current of any other thin-film panel on the market today and thus simplifying system deployment;
- an intensely systems-optimized product with the lowest balance-of-system cost of any thin-film panel - due to innovations in design we have included.
Breaking the $1 per watt barrier is important; that means that it is possible to build a solar system that is cheaper than a coal plant. Groundbreaking New Solar Panels ### #3 Michaelangelica Michaelangelica Creating • Members • 7797 posts Posted 02 January 2008 - 04:15 AM Carbon electrodes could slash cost of solar panels * 12:43 19 December 2007 * NewScientist.com news service Transparent electrodes created from atom-thick carbon sheets could make solar cells and LCDs without depleting precious mineral resources, say researchers in Germany. Solar cells, LCDs, and some other devices, must have transparent electrodes in parts of their designs to let light in or out. These electrodes are usually made from indium tin oxide (ITO) but experts calculate that there is only 10 years' worth of indium left on the planet, with LCD panels consuming the majority of existing stocks. Carbon electrodes could slash cost of solar panels - tech - 19 December 2007 - New Scientist Tech ### #4 DougF DougF Hypo Contributer • Members • 1229 posts Posted 02 January 2008 - 08:38 AM Thanks Michaelangelica, This is good news for us all. ### #5 Michaelangelica Michaelangelica Creating • Members • 7797 posts Posted 07 January 2008 - 12:19 AM In Germany, solar energy already provides 3 gigawatts of electricity, the equivalent of four large fossil fuel power stations. In 2003, the German government passed a law obliging energy companies to purchase solar energy from anyone who can produce it at nearly double the market price. The result? Homeowners and business flocked to buy photo voltaic (PV) cells and Germany's 300,000 PV cells now account for nearly 60% of the world's solar panels. This upsurge for demand for in solar technology has had the knock-on effect of stimulating research and development. Worldwide Sawdust:: Welcome to the Solar Century SunPower to Build 8 Megawatt Solar Power Plant in Spain Naturener Expands SunPower Deployment to Approximately 30 Megawatts January 04, 2008: 08:00 AM EST SAN JOSE, Calif., Jan. 4 /PRNewswire-FirstCall/ -- SunPower Corp. , a Silicon Valley-based manufacturer of high-efficiency solar cells, solar panels and solar systems, today announced that its Spanish subsidiary will engineer, procure equipment for and construct an approximately 8 megawatt solar electric power plant in the Extremadura region of Spain. SunPower to Build 8 Megawatt Solar Power Plant in Spain New solar energy collector so efficient it works at night The key to it all is nanotechnology. With this new technology, millions of extremely small twists of metal are molded into banks of "microantennas", which can be placed on almost any material, including plastic sheets. These spiral shaped "microantennas" are about 1/25 the width of a human hair. They are so small that they resonate from the interaction with the sun's infrared rays. This resonation can be translated into energy. During the day, the Earth soaks up a lot of this infrared energy, which is then radiated out at night -- enabling these microantennas to collect power even after the sun has set. New solar energy collector so efficient it works at night - Neoseeker News Article ### #6 DougF DougF Hypo Contributer • Members • 1229 posts Posted 07 January 2008 - 04:02 PM Michaelangelica, Thanks for posting this, this will be great once thay get it working. The solar infrared rays hitting the nanoantennas generate a current that has a frequency which oscillates ten thousand billion times a second -- which is far to great of an oscillation that standard electrical appliances can handle. But the teams working on it: ### #7 InfiniteNow InfiniteNow Suspended • Members • 9148 posts Posted 07 January 2008 - 06:06 PM Grrrrr.... All I wanted to know is on what substrate these were made... is it silicon? Is it some sort of thin film? What? What types of machines and chambers does it take to put these together? What are the materials? And they didn't say! Also, conventional solar panels are expensive to produce because the rely on high-grade silicon, which is becoming increasingly expensive. These new solar collectors can be manufactured for much less -- the research team aims "to make nanoantenna arrays as cheap as inexpensive carpet." But! It's not all worked out yet. Still pretty cool, though. ### #8 InfiniteNow InfiniteNow Suspended • Members • 9148 posts Posted 08 January 2008 - 09:54 AM I came across this today and thought I'd share. Cheers. While many people choose to install solar panels for environmental reasons, financial incentives such as tax credits, system rebates, low-interest loans and net metering programs can help tip the scales as you evaluate the costs involved. Some countries, such as Germany and Japan, have had attractive government incentives for some time now. In the U.S., the federal tax credit is capped at$2,000 but consumers can often offset the cost of installation with rebates from their local utility company or state tax credits – as well as the savings in energy costs which accrue over time.
To find out more about current incentives in the U.S. at the federal, state and local level:
DSIRE: DSIRE Home
For information on European incentives:
Green Power Market Development Group
For international incentives in countries outside the U.S. and Europe
Global Renewable Energy Policies and Measures
### #9 Michaelangelica
Michaelangelica
Creating
• Members
• 7797 posts
Posted 15 January 2008 - 06:57 AM
Looks like India will be the next to subsidise solar panels.
That is one big market!
The new and renewable energy ministry announced that it will provide financial assistance of Rs 12 per kilowatt hour in case of solar photovoltaic and Rs 10 per kilowatt hour in case of solar thermal power fed to the electricity grid. This move will be a shot in the arm for the industry and should result in the immediate action in creation of capacities.
A kicker for solar power
Saturday, January 12, 2008
It deserved to be front-page news, but barely got any ink at all outside the in dustry and business press. A Silicon Valley company, Nanosolar, began production last month of what it calls the "third wave" of photovoltaic technology, which turns sunlight into electricity.
. . .
The breakthrough here isn't in the basic technology, which has been around for about a decade, but rather in its production. What Nanosolar has done is to come up with a means of producing solar cells that is similar to printing a newspaper. Presses apply a layer of solar-absorbing nano (minute) particles to metal sheets at a rate of several hundred feet per minute. Popular Science magazine, describing the likely impact of the technology as "the new dawn of solar," declared it the "innovation of the year" in 2007.
. . .With support from the founders of Google and the Department of Energy, Nanosolar has built the country's largest manufacturing complex for solar panels in the country, with production of 430 megawatts of solar capacity annually. That's about half the capacity of the Unit 1 nuclear reactor at Three Mile Island.
Solar power shines- PennLive.com
Closing the gap on solar
Posted on January 12, 2008
About a third of the cost of a solar panel comes from silicon, and right now it’s only produced by seven companies–the Seven Silicon Sisters. But since the solar industry is so hot, as many as 50 new companies have started up, adding competition and increasing supply, which will put a downward pressure on prices. (One of the new companies, building a factory in China, thinks they can cut silicon prices nearly in half in seven years.)
Closing the gap on solar : SmartPower Blog
The Japanese were looking to open a plant at Lithgow NSW too?
Nanoantennas, a new, more efficient method for solar power?
Nanoantennas, a new, more efficient method for solar power?
### #10 Michaelangelica
Michaelangelica
Creating
• Members
• 7797 posts
Posted 02 March 2008 - 01:43 PM
Inexpensive Solar Cells Made More Efficient With New Sensitizers
ScienceDaily (Mar. 1, 2008) — Solar cell technology is marching ahead, though it still struggles with the two problems: efficiency and high production costs. In collaboration with Satoshi Uchida at the University of Tokyo, Michael Grätzel and his research group at the Swiss Federal Institute of Technology in Lausanne have now developed new sensitizers that should help an inexpensive type of solar cell to be more efficient. The sensitizers are based on the dye indoline.
Inexpensive Solar Cells Made More Efficient With New Sensitizers
With the introduction of the Federal Government rebates a 1kW grid connectable solar system will cost you between $3,900 and$4,300 (after the rebate) to install.
Solar Energy Solutions, Green Energy Solutions
### #11 Michaelangelica
Michaelangelica
Creating
• Members
• 7797 posts
Posted 05 March 2008 - 01:44 AM
Is wind solar energy?
I guess I can sneak it in?
For about $1,500 (delivered)for 1 Kw this looks the best home DIY value yet. Solar is about twice the price, then you do need to be somewhere windy.(seaside?) ENERGISTAR - Miniwind Futurenergy Kyocera solar wind turbines generators hydro water turbines grid tie inverters renewable alternative energy systems ### #12 Michaelangelica Michaelangelica Creating • Members • 7797 posts Posted 13 March 2008 - 05:20 AM Solar Space Power (13 March) Space is once again ‘the new frontier’ – this time in finding a solution to the Earth’s energy woes. Read more'Solar Space Power'» Watch VideoVideo 7:14 mins Win | RP It seems to me this money would be better spent on putting solar panels on houses, making people independent of big government and big business. But the USA military wants it and they usually get what they want Solar Space Power (13 March) Space is once again ‘the new frontier’ – this time in finding a solution to the Earth’s energy woes. Read more'Solar Space Power'» Watch VideoVideo 7:14 mins Win | RP Solar Space Power (Catalyst, ABC1) ### #13 Michaelangelica Michaelangelica Creating • Members • 7797 posts Posted 17 March 2008 - 04:37 AM Solar Power Paint Is 2.5 Years Away Posted on Mar 16, 2008 - 01:47 PM By: Adam Beazley According to a UK scientist Dr. Dave Worsley, commercial panels of architectural steel, painted with special solar-power paint capable of generating electricity should be available in as little as two and a half years. Dr Worsley and Dr Trystan Watson of Swansea University have developed a new paint based on dye-sensitized solar cells. This new solar paint is a result of previous research into different ways of preventing metal buildings from degrading due to the elements. Dr. Worsley describes the idea as “a collision between two existing technologies – one for generating electricity and one for applying paint to steel.” Neutral Existence: Solar Power Paint Is 2.5 Years Away ### #14 Michaelangelica Michaelangelica Creating • Members • 7797 posts Posted 10 April 2008 - 06:16 PM Not only does Australia have perfect conditions for solar technology, but it has also produced some of the world's best photovoltaic engineers. Last year Zhengrong Shi, a former student at the University of NSW became the richest man in China with his solar cell company Suntech. It is Shi's former teacher Martin Green who currently holds the world record for the most efficient silicon solar cells. . . . At the moment in Australia you can receive an$8,000 rebate for installing solar cells in your home or business. Another great bonus of these cells is their 25-year warranty. What else but a saucepan has that kind of guarantee?
. . .
Australia should look to countries like Germany for direction. Professor Green explains that their rebate scheme isn't a one-off payment like Australia's but a continued payment per unit of electricity, placed back into the power grid. Essentially you can continue to make money from your solar system for the lifetime of the cells
. . .
However, times they are a changing and the federal government's announcement for plans to build Australia's largest solar power station in South Australia is a step in the right direction,
Science Show - 5April2008 - Solar cells
is battery technology and waste pollution/disposal a problem/cost?
What happens when German Power Companies get sick of just providing "back up" power for everyone with solar panels?
### #15 Michaelangelica
Michaelangelica
Creating
• Members
• 7797 posts
Posted 19 May 2008 - 11:19 PM
New World Record For Efficiency For Solar Cells; Inexpensive To Manufacture
ScienceDaily (May 17, 2008) — Physicist Bram Hoex and colleagues at Eindhoven University of Technology, together with the Fraunhofer Institute in Germany, have improved the efficiency of an important type of solar cell from 21.9 to 23.2 percent (a relative improvement of 6 per cent). This new world record is being presented on Wednesday May 14 at a major solar energy conference in San Diego
New World Record For Efficiency For Solar Cells; Inexpensive To Manufacture
Do you think all this "latest greatest" new technology is making people wait until installing the 'best' system?
### #16 InfiniteNow
InfiniteNow
Suspended
• Members
• 9148 posts
Posted 20 May 2008 - 08:26 AM
Do you think all this "latest greatest" new technology is making people wait until installing the 'best' system?
While it's certainly possible that there are a few out there for whom this is a good enough reason to wait, I'd suggest that the vast majority of delay is caused by the huge front end cost of installation. Nearly everyone wants these things, but very few can afford them.
It's hard to justify a \$15K investment which will take 10-20 years to pay itself back when you're struggling to put food on the table today, ya know?
### #17 Michaelangelica
Michaelangelica
Creating
• Members
• 7797 posts
Posted 13 June 2008 - 02:07 AM
I think a lot of people are waiting for the price to come down;
& technology and government subsidies to catch up.
Every day it seems there is a 'new' technology heralded.
Sometimes years or decades before its practical application. "False Hope"?
Government subsides should be targeted to the poor, disabled and disadvantaged.
Solar-power paint lets you generate as you decorate
* 13:59 07 March 2008
* NewScientist.com news service
* Michael Marshall
A lick of solar-power paint could see the roofs and walls of warehouses and other buildings generate electricity from the sun, if research by UK researchers pays off.
The scientists are developing a way to paint solar cells onto the steel sheets commonly used to clad large buildings.
Steel sheets are painted rapidly in steel mills by passing them through rollers. A consortium led by Swansea University, UK, hopes to use that process to cover steel sheets with a photovoltaic paint at up to 40 square metres per minute.
The paint will be based on dye-sensitised solar cells. Instead of absorbing sunlight using silicon like conventional solar panels, they use dye molecules attached to particles of the titanium dioxide pigment used in paints.
That gives an energy boost to electrons, which hop from the dye into a layer of electrolyte. This then transfers the extra energy into a collecting circuit, before the electrons cycle back to the dye.
While less efficient than conventional cells, dye-based cells do not require expensive silicon, and can be applied as a liquid paste.
Action HOPE: TECHNOLOGICAL ADVANCES
|
2020-07-15 08:01:23
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.183419331908226, "perplexity": 3944.7564118148803}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657163613.94/warc/CC-MAIN-20200715070409-20200715100409-00136.warc.gz"}
|
http://mathhelpforum.com/statistics/141165-help-w-statistics-homework-please.html
|
# Math Help - Help w/Statistics Homework Please :(
1. ## Help w/Statistics Homework Please :(
These questions are giving me a hard time. Any help is appreciated. Thanks!!
1: The weight of corn chips dispensed into a 48 ounce bag by the dispensing machine has been identified as possessing a normal distribution with a mean of 48.5 ounces and a standard deviation of 0.2 ounces. What proportion of the 48 ounce bags contains more than the advertised 48 ounces of chips?
2: Before a new phone system was installed, the amount a company spent on personal calls followed a normal distribution with an average of $900 per month and a standard deviation of$50 per month. Refer to such expenses as PCE'S (personal call expenses). Using the distribution above, what is the probability that during a randomly selected month PCE'S were between $775.00 and$990.00?
3: Farmers often sell fruits and vegetables at roadside stands during the summer. One such roadside stand has daily demand for tomatoes that is approximately normal distributed with a mean of 134 tomatoes and a standard deviation of 30 tomatoes. How many tomatoes must be available on any given day so that there is only a 1.5% chance that all tomatoes will be sold?
4: Transportation officials tells us that 70% of drivers wear seat belts while driving. Find the probability that more than 579 drivers in a sample of 800 drivers wear seat belts. Hint: This is binomial distribution. Use normal distribution approximations.
2. Help comes in the form of some questions...
Were there some that did NOT give you problems? It woudl be good to see what sorts of things your can understand well. This will provide a foundation for moving on.
You will HAVE to calculate a Z-Score. This is an unavoidable skill. Your calculator may do it for you. Can you do it?
3. Question number one i'm pretty sure i did right:
P(x>48) = P(z>(48.5-48/2) = P(z>2.5) = .9938
Double checked w/TI83 normalcdf(48,E99,48.5,0.2) = .9938
I understand a lot of these problems, but these four are the ones giving me a hard time. I have always had trouble with word problems. If the problem is set up for me i.e. mu=48.5 SD= 0.2 P(x>48) i would be fine.
4. Originally Posted by Fijilight08
Question number one i'm pretty sure i did right:
P(x>48) = P(z>(48.5-48/2) = P(z>2.5) = .9938
There is quite a bit wrong with this. You should take more time with your notation and write what you mean.
P(x>48) -- This is fine.
P(z>(48.5-48/2) -- This is not fine. You have overlooked your algebra. There is a reason you studied the "Order of Operations". Your subtraction is in the wrong order and you have a typo. It should be this:
P(z>((48-48.5)/0.2)
P(z>2.5) - This is obviously wrong. How can the value be greater than zero when the value of interest is below the mean? It should be this:
P(z>-2.5)
P(z>2.5) = .9938 -- This is also very obviously wrong. p(z>2.5) should be a very small number. How did you get almost 1? I think it obvious that you understood what you were writing, you just didn't share it with the rest of us.
Please be careful with the notation.
I understand a lot of these problems, but these four are the ones giving me a hard time. I have always had trouble with word problems. If the problem is set up for me i.e. mu=48.5 SD= 0.2 P(x>48) i would be fine.
As far as setting up the problems and doing fine if someone sets them up for you...well...setting them up is the interesting part that you should be learning. Work on that skill. You'll get it.
|
2016-05-24 07:11:20
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5771858096122742, "perplexity": 836.56475827278}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049270134.8/warc/CC-MAIN-20160524002110-00162-ip-10-185-217-139.ec2.internal.warc.gz"}
|
https://pvlib-python.readthedocs.io/en/stable/reference/generated/pvlib.temperature.faiman.html
|
pvlib.temperature.faiman#
pvlib.temperature.faiman(poa_global, temp_air, wind_speed=1.0, u0=25.0, u1=6.84)[source]#
Calculate cell or module temperature using the Faiman model.
The Faiman model uses an empirical heat loss factor model 1 and is adopted in the IEC 61853 standards 2 and 3.
Usage of this model in the IEC 61853 standard does not distinguish between cell and module temperature.
Parameters
• poa_global (numeric) – Total incident irradiance [W/m^2].
• temp_air (numeric) – Ambient dry bulb temperature [C].
• wind_speed (numeric, default 1.0) – Wind speed in m/s measured at the same height for which the wind loss factor was determined. The default value 1.0 m/s is the wind speed at module height used to determine NOCT. [m/s]
• u0 (numeric, default 25.0) – Combined heat loss factor coefficient. The default value is one determined by Faiman for 7 silicon modules. $$\left[\frac{\text{W}/{\text{m}^2}}{\text{C}}\right]$$
• u1 (numeric, default 6.84) – Combined heat loss factor influenced by wind. The default value is one determined by Faiman for 7 silicon modules. $$\left[ \frac{\text{W}/\text{m}^2}{\text{C}\ \left( \text{m/s} \right)} \right]$$
Returns
numeric, values in degrees Celsius
Notes
All arguments may be scalars or vectors. If multiple arguments are vectors they must be the same length.
References
1
Faiman, D. (2008). “Assessing the outdoor operating temperature of photovoltaic modules.” Progress in Photovoltaics 16(4): 307-315.
2
“IEC 61853-2 Photovoltaic (PV) module performance testing and energy rating - Part 2: Spectral responsivity, incidence angle and module operating temperature measurements”. IEC, Geneva, 2018.
3
“IEC 61853-3 Photovoltaic (PV) module performance testing and energy rating - Part 3: Energy rating of PV modules”. IEC, Geneva, 2018.
|
2022-06-26 06:16:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2778126001358032, "perplexity": 8956.35578164375}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103037089.4/warc/CC-MAIN-20220626040948-20220626070948-00085.warc.gz"}
|
https://math.stackexchange.com/questions/3329043/swedish-mathematical-competition-problem-for-pre-tertiary-students
|
# Swedish mathematical competition problem for pre-tertiary students
In a class of students, one student is given a bag of 2014 coins whilst none of the other students in the class receive any coins at all. Every time two students meet, if they have an even amount of coins together then they split the coins equally, while if they have an odd number of coins together then they put one coin in the class's cash register and then split the coins equally. After a long time with alot of trades, it so happens to be that every single coin is in the class's cash register. What is the least number possible of students in the class?
Before reading my solution feel free to try solving the problem by yourself. Also, the questions I have regarding this problem is: Do there exists more sophisticated maths that could be used such as recursions, sets etc. to shorten the proof and make it look nicer? Any improvments overall on my solution? Any suggestions for proofs on the second part? Plesase keep those questions in mind as you proceed to read through my solution.
.
.
.
.
.
My solution:
Consider the case where every student in the class have a positive number of coins, we find that there is no way of reaching $$0$$ coins for any of the students in this case, for if we have two distict students $$A$$ and $$B$$ with $$a$$ and $$b$$ number of coins respectively with $$a,b \geq 1$$ it will result in $$a+b \geq 2$$ and so both $$A$$ and $$B$$ will be left with $$\geq 1$$ number of coins after the trade. Thus the condition that no students have any money left cannot be fulfilled.
We understand that at least one of the students must have $$0$$ coins. We investigate further around a student with $$0$$ coins and since trading is a central part of the problem, naturally we want to see what happens when a student (or two) with $$0$$ coins makes a trade. Let $$N$$ denote the student with $$0$$ coins. Now if $$N$$ trades with someone who has $$\geq 2$$ coins, we find that both $$N$$ and the other student will leave with $$\geq 1$$ coins and we are up to no good. However, if $$N$$ trades with someone who has exactly $$1$$ coin, we find that both $$N$$ and the other student will be left with $$0$$ coins after the trade(!). We conclude that in order for the number of students with $$0$$ coins to increase in the class without adding more students to the class, the only way of doing so is having a student with $$0$$ coins, $$N$$, trade with someone who has $$1$$ coin.
Unavoidably, we then must have students reaching $$1$$ coin in the process, and so we aim our investigation towards that goal. Also note that, since the students split the coins equally, if we have one student left with $$1$$ coin after a trade then there exist another student with $$1$$ coin aswell. Now trivially, I would argue that the fastest and most sufficient way of reaching two students with $$1$$ coin each under the given restrictions, is just dividing the same student's amount of coins by two over and over again by adding more students to the class. With that as our choice of strategy combined with some numerical investigation, we are able to conjecture the following:
Conjecture: "Under the given conditions and initially starting with $$c$$ coins for some integer $$c$$, the number of students it takes for two students to be able to leave a trade with $$1$$ coin each is in total only one student short of being a sufficient solution for the problem"
For example; for the picture below, let the capital letter denote a distinguishable student and let the number describe the amount of coins that student has. Initially, student $$A$$ starts with $$10$$ coins and is required to trade with $$3$$ other students, namely $$B,C$$ and $$D$$, in order for $$A$$ to reach $$1$$ coin. We see that is takes a total of $$4$$ students to reach two students who both have $$1$$ coin each and by our conjecture, $$5$$ students will be enough and sufficient as a solution to the problem.
Proof of conjecture: That one student we are short of is $$N$$. $$N$$ has $$0$$ coins and will only be involved whenever a student has $$1$$ coin, $$N$$ will then trade with that student to bring it down to $$0$$ coins. Let student $$A$$ initially start with all the $$A_0$$ coins. After the first trade, $$A$$ has $$A_1$$ coins left. For each trade the coins is reduced all the way until $$A$$ has $$A_r$$ coins left which is equal to $$1$$ coin. It is clear to see that student $$A$$ required $$r$$ additional students with $$0$$ coins each to trade with in order for $$A$$ to reach $$1$$ coin. Note that each and every one of the $$r$$ students have a different amount of coins left and it can only be one of the same amounts of coins that $$A$$ once had after trade $$A_i$$, $$1 \leq i \leq r$$. With $$B_i$$ denote the student that is left with $$A_i$$ coins, for $$1 \leq i \leq r$$. If we unravel it all backwards, after trading with $$N$$, both $$A_r$$ and $$B_r$$ now have $$0$$ coins each. Now $$B_{r-1}$$ will require as many trades as $$A_{r-1}$$ required to reach $$1$$ coin, which was one student with $$0$$ coins. Since both $$A_r$$ and $$B_r$$ currently have $$0$$ coins, $$B_{r-1}$$ can be brought down to $$0$$ coins aswell. Next, in order for $$B_{r-2}$$ to reach $$1$$ coin, we need two students with $$0$$ coins each, which we obviously have. Next we do $$B_{r-3}$$ and then $$B_{r-4}$$ and so on. As the "number of kids with $$0$$ coins required to trade with" increases with one at a time, and as the "number of kids with $$0$$ coins" available starts of with one more and increases at the same rate, we will be able to bring every student down to $$0$$ coins eventually and hence we have proved our conjecture. □
Now we only have to find the answer to "what is the minimum amount of students needed in order for two of them to reach $$1$$ coin". It is not hard to see that it requires $$1$$ trade from $$2$$ coins, $$1$$ trade from $$3$$ coins but $$2$$ trades from $$4$$ coins ... $$2$$ trades from $$5$$ coins, $$2$$ trades from $$6$$ coins, $$2$$ trades from $$7$$ coins but $$3$$ trades from $$8$$. If we write it up, we see that if the initial amount of money is $$c$$, for $$2^n \leq c \lt 2^{n+1}$$, then $$n$$ additional students are required from the start which puts us at: $$n +$$ "The initial student with all the money" $$+ N$$ $$=$$ $$n+2$$ students in total. (My proof of this part is not that great and even "wordier" than the first one so I will leave it out)
As $$c$$ is $$2014$$ in this problem and we have $$2^{10} \leq 2014 \lt 2^{10+1}$$, therefore the least number of students possible is $$10 +2 = 12$$, and we are done.
My questions: Do there exists more sophisticated maths that could be used such as recursions, sets etc. to shorten the proof and make it look nicer? Any improvements overall on my solution? Any suggestions for proofs on the second part?
|
2021-12-05 19:38:07
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 116, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7420417666435242, "perplexity": 287.6571621000788}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363216.90/warc/CC-MAIN-20211205191620-20211205221620-00038.warc.gz"}
|
https://www.semanticscholar.org/paper/Linear-Stability-Analysis-of-Differentially-New-for-Karino-Eriguchi/0e769bf8a4722d0cb53600766015f14ec9070efc
|
# Linear Stability Analysis of Differentially Rotating Polytropes: New Results for the m = 2 f-Mode Dynamical Instability
@article{Karino2003LinearSA,
title={Linear Stability Analysis of Differentially Rotating Polytropes: New Results for the m = 2 f-Mode Dynamical Instability},
author={Shigeyuki Karino and Yoshiharu Eriguchi},
journal={The Astrophysical Journal},
year={2003},
volume={592},
pages={1119-1123}
}
• Published 16 April 2003
• Physics
• The Astrophysical Journal
We have studied the f-mode oscillations of differentially rotating polytropes by making use of the linear stability analysis. We found that the critical values of T/|W| where the dynamical instability against the m = 2 f-mode oscillations sets in decrease down to T/|W| ~ 0.20 as the degree of differential rotation becomes higher. Here m is an azimuthal mode number and T and W are the rotational energy and gravitational potential energy, respectively. This tendency is almost independent of the…
23 Citations
## Figures from this paper
Unstable normal modes of low T /W dynamical instabilities in differentially rotating stars
• Physics
• 2016
We investigate the nature of low $T/W$ dynamical instabilities in differentially rotating stars by means of linear perturbation. Here, $T$ and $W$ represent rotational kinetic energy and the
Determining the stiffness of the equation of state using low T/W dynamical instabilities in differentially rotating stars
We investigate the nature of low T/W dynamical instabilities in various ranges of the stiffness of the equation of state in differentially rotating stars. Here T is the rotational kinetic energy,
Accurate simulations of the dynamical bar-mode instability in full general relativity
• Physics
• 2007
We present accurate simulations of the dynamical bar-mode instability in full general relativity focusing on two aspects which have not been investigated in detail in the past, namely, on the
Vibrational Stability of Differentially Rotating Polytropic Stars
• Physics
• 2015
A method for computing the periods of radial and non-radial modes of oscillations to determine the vibrational stability of differentially rotating polytropic gaseous spheres is presented and
Effect of Mass Variation on the Radial Oscillations of Differentially Rotating and Tidally Distorted Polytropic Stars
• Physics
• 2015
A method is proposed to compute the eigenfrequencies of small adiabatic pseudo-radial modes of oscillations of differentially rotating and tidally distorted stellar models by taking into account the
On the role of corotation radius in the low T/W dynamical instability of differentially rotating stars
• Physics
• 2017
We investigate the nature of so-called low $T/W$ dynamical instability in a differentially rotating star by focusing on the role played by the corotation radius of the unstable oscillation modes. An
The Nature of Low T/|W| Dynamical Instabilities in Differentially Rotating Stars
• Physics
• 2003
Recent numerical simulations indicate the presence of dynamical instabilities of the f-mode in differentially rotating stars even at very low values of T/|W|, the ratio of kinetic to potential
Effects of differential rotation on the eigenfrequencies of small adiabatic barotropic modes of oscillations of polytropic models of stars
• Physics
• 2009
Mohan et al (1992 Astrophys. Space. Sci. 193 69) (1998 Indian J. Pure Appl. Math. 29 199) investigated the problem of equilibrium structures and periods of small adiabatic oscillations of
Rotational modes of relativistic stars: Numerical results
• Physics
• 2003
We study the inertial modes of slowly rotating, fully relativistic compact stars. The equations that govern perturbations of both barotropic and nonbarotropic models are discussed, but we present
Numerical evolution of secular bar-mode instability induced by the gravitational radiation reaction in rapidly rotating neutron stars
• Physics
• 2004
The evolution of a nonaxisymmetric bar-mode perturbation of rapidly rotating stars due to a secular instability induced by gravitational wave emission is studied in post-Newtonian simulations taking
## References
SHOWING 1-10 OF 35 REFERENCES
Nonaxisymmetric Dynamic Instabilities of Rotating Polytropes. I. The Kelvin Modes
• Physics
• 1998
We study the dynamic instabilities of rotating polytropes in the linear regime using an approximate Lagrangian technique and a more precise Eulerian scheme. We consider nonaxisymmetric modes with
Dynamical stability of differentially rotating masses – III. Additional numerical results and interpretation
The dynamical stability of differentially rotating, self-gravitating masses is analysed. Two types of shear instability are obtained. The first is caused by the principal mode which is caracterized
Dynamical instability of differentially rotating stars
• Physics
• 2002
We study the dynamical instability against bar-mode deformation of differentially rotating stars. We performed numerical simulation and linear perturbation analysis adopting polytropic equations of
R-mode oscillations of differentially and rapidly rotating Newtonian polytropic stars
• Physics
• 2001
For analysis of the r-mode oscillation of hot young neutron stars, it is necessary to consider the effect of differential rotation, because viscosity is not strong enough for differentially rotating
R-mode oscillations of rapidly rotating Newtonian stars: A new numerical scheme and its application to the spin evolution of neutron stars
We have developed a new numerical scheme to solve r-mode oscillations of {\it rapidly rotating polytropic stars} in Newtonian gravity. In this scheme, Euler perturbations of the density, three
The Bar-Mode Instability in Differentially Rotating Neutron Stars: Simulations in Full General Relativity
• Physics
• 2000
We study the dynamical stability against bar-mode deformation of rapidly spinning neutron stars with diUerential rotation. We perform fully relativistic three-dimensional simulations of compact stars
Non-Axisymmetric Shear Instabilities in Thick Accretion Disks
• Physics
• 1989
Recent results on the Papaloizou-Pringle non-axisymmetric instability in thick accretion disks is reviewed. Considerable work has been done on “slender” tori and annuli; these are systems whose
Relativistic simulations of rotational core collapse - II. Collapse dynamics and gravitational radiation
• Physics
• 2002
We have performed hydrodynamic simulations of relativistic rotational supernova core collapse in axisymmetry and have computed the gravitational radiation emitted by such an event. The Einstein
Non-axisymmetric dynamical instability of differentially rotating, self-gravitating masses
La stabilite dynamique de masses homogenes, autogravitantes, et en rotation differentielle est etudiee. En plus de l'instabilite classique du type barre, on obtient une nouvelle instabilite non
Gravitational waves from the dynamical bar instability in a rapidly rotating star
A rapidly rotating, axisymmetric star can be dynamically unstable to an $m=2$ bar'' mode that transforms the star from a disk shape to an elongated bar. The fate of such a bar-shaped star is
|
2022-01-23 04:09:07
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7162995934486389, "perplexity": 4099.591275692484}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320303956.14/warc/CC-MAIN-20220123015212-20220123045212-00014.warc.gz"}
|
http://mathhelpforum.com/differential-geometry/109840-need-help-proof-print.html
|
# Need help on this proof
• October 22nd 2009, 10:31 PM
binkypoo
Need help on this proof
I have to prove the following:
Suppose that functions f and g are continuous at $x=c\in (a,b)$ and f(c) > g(c). Prove there exists $\delta$ > 0 such that for all $x\in (a,b)$ with $|x-c|<\delta$, we have f(x) > g(x).
I really have no idea. any help on this would be great thanks!
• October 22nd 2009, 11:53 PM
el_manco
Hello
First you can proof this result.
If $f:(a,b)\longrightarrow R$ is a continuous function in $c\in (a,b)$ then and $f(c)>0$, then there exists $\delta>0$ such that $|x-c|<\delta, x\in (a,b)$ implies $f(x)>0$.
This follows inmediately from continuous definition; taking $\epsilon=f(x)$ there exists $\delta>0$ such that $|x-c|<\delta, x\in (a,b)$ implies:
$|f(x)-f(c)|-f(c)\quad \Rightarrow\quad f(x)>0$
Then, for your exercise, apply this result to the funcion $f-g$. Note that the difference of continuous function is continuous.
Best regards.
• October 25th 2009, 08:18 PM
binkypoo
[quote=el_manco;389581]taking $\epsilon=f(x)$
do you mean $\epsilon=f(c)$??
• October 25th 2009, 11:40 PM
el_manco
[quote=binkypoo;391521]
Quote:
Originally Posted by el_manco
taking $\epsilon=f(x)$
do you mean $\epsilon=f(c)$??
Yes, of course. I meant $\epsilon=f(c)$. Sorry for the typo.
Best regards.
|
2016-05-31 16:24:20
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 20, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9546003937721252, "perplexity": 621.7177852750074}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464051417337.23/warc/CC-MAIN-20160524005657-00081-ip-10-185-217-139.ec2.internal.warc.gz"}
|
https://www.juliahomotopycontinuation.org/examples/tritangents/
|
Tritangents
Using HomotopyContinuation.jl for computing tritangents of sextic curves
A complex sextic curve in $\mathbb{C}^3$ is the intersection of a cubic surface $\mathcal{C}\subset \mathbb{C}^3$ with a quadric $\mathcal{Q}\subset \mathbb{C}^3$; that is, there exist polynomials in three variables $C$ of degree 3 and $Q$ of degree 2, such that $\mathcal{C} = \{x\in\mathbb{C}^3 : C(x) = 0\}$ and $\mathcal{Q} = \{x\in\mathbb{C}^3 : Q(x) = 0\}$.
An interesting fact about such sextic curves is that they have 120 complex tritangents (at least almost all of them). This means that there are 120 affine complex planes, that touch the sextic at three points.
For instance, the sextic with $Q=x_3 - x_1x_2$ and $C=x_1^3+x_2^3+x_3^3 - 1$ together with one of its tritangents is shown below ($C$ is called the Fermat cubic). The tangent plane is depicted as a triangle. The red points are the points at which the plane touches the sextic.
In the last months, tritangents have been a popular research topic in the Nonlinear Algebra Group at MPI Leipzig: Türkü Özlüm Celik, Avinash Kulkarni, Yue Ren, Mahsa Sayyary Namin, Emre Sertöz and Bernd Sturmfels have worked on this and wrote several articles, which you can find here, here and here.
Computing tritangents numerically is the topic of this blog post. I want to follow the strategy proposed by Hauenstein et. al., who discuss how to compute tritangents using Bertini. I will repeat their computations using HomotopyContinuation.jl.
Let $\mathcal{V} = \mathcal{C}\cap \mathcal{Q}$ be a sextic in $\mathbb{C}^3$ and let $H\subset \mathbb{C}^3$ be an affine plane. Being a tritangent means that there exists three points $x,y,z\in \mathcal{V}$ with
$$x \in H, \mathrm{T}_x \mathcal{V} \subset H, \text{ and } y \in H, \mathrm{T}_y \mathcal{V} \subset H, \text{ and } z \in H, \mathrm{T}_z \mathcal{V} \subset H.$$
The points $x,y,z$ are the contact points of the plane $H$ with $\mathcal{V}$.
Let $h\in \mathbb{C}^3$ be a vector with $H=\{x\in \mathbb{C}^3: h^Tx=1\}$. Then, $H$ is a tritangent with contact points $x,y,z$, if and only if $(x,y,z,h)$ is a zero of the following polynomial system:
$$F = \begin{bmatrix} P(x,h) \\ P(y,h)\\ P(z,h)\end{bmatrix}, \text{ where } P(x,h) = \begin{bmatrix} h^T x - 1 \\ Q(x) \\ C(x) \\ \det([h \nabla_xQ \nabla_xC]) \end{bmatrix}.$$
Here, $\nabla_x$ denotes the gradient operator. Let us create the system $F$ in Julia. For simplicity, I will consider the case when $Q=x_3 - x_1x_2$.
using HomotopyContinuation, LinearAlgebra
@var h[1:3] # variables for the plane
@var x[1:3] y[1:3] z[1:3] #variables for the contact points
Q = x[3] - x[1] * x[2]
#the cubic C with coefficients c
C, c = dense_poly(x, 3, coeff_name = :c)
#generate the system P for the contact point x
P_x = [
h ⋅ x - 1;
Q;
C;
det([h differentiate(Q, x) differentiate(C, x)])
]
#generate a copy of P for the other contact points y,z
P_y = [p([h; x; c] => [h; y; c]) for p in P_x]
P_z = [p([h; x; c] => [h; z; c]) for p in P_x]
#define F
F = System([P_x; P_y; P_z]; variables = [h;x;y;z], parameters = c)
Let us first solve F by total degree homotopy when the coefficients of C are random complex numbers.
#create random complex coefficients for C
c₁ = randn(ComplexF64, 20)
#solve the system for c₁
S = solve(F; target_parameters = c)
On my laptop the computation takes 14 seconds. Here is what I get.
Result with 4020 solutions
==========================
• 12636 paths tracked
• 720 non-singular solutions (0 real)
• 3300 singular solutions (0 real)
• random_seed: 0x825eefa1
• start_system: :polyhedral
• multiplicity table of singular solutions:
┌───────┬───────┬────────┬────────────┐
│ mult. │ total │ # real │ # non-real │
├───────┼───────┼────────┼────────────┤
│ 1 │ 2499 │ 0 │ 2499 │
│ 2 │ 131 │ 0 │ 131 │
│ 3 │ 670 │ 0 │ 670 │
└───────┴───────┴────────┴────────────┘
The count of 720 is correct: for each of the 120 tritangents I get 6 solutions corresponding to all permutations of the contact points $x,y,z$ — and $6 \cdot 120 = 720$.
Let us extract the 720 solutions.
sols = solutions(S)
One may use sols in a parameter homotopy for computing the tritangents of other sextics. Here is code for tracking sols from c₁ to $x_1^3+x_2^3+x_3^3-1$.
#define the coefficients for C
c₀ = coeffs_as_dense_poly(x[1]^3+x[2]^3+x[3]^3-1, x, 3)
#track the solutions from c₁ to c₀
R = solve(F, sols, start_parameters = c₁, target_parameters = c₀)
On my laptop this computation takes 0.048781 seconds — tracking solutions from c₁ to c₀ is much faster than using the direct solve approach. Here is the summary of R:
Result with 684 solutions
=========================
• 720 paths tracked
• 684 non-singular solutions (24 real)
• random_seed: 0x2bae05f8
From the $4= \frac{24}{6}$ real solutions one was used in the gif above.
Cite this example:
@Misc{ tritangents2021 ,
author = { Paul Breiding },
title = { Tritangents },
howpublished = { \url{ https://www.JuliaHomotopyContinuation.org/examples/tritangents/ } },
note = { Accessed: February 15, 2021 }
}
|
2021-02-25 05:01:02
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6124898195266724, "perplexity": 2855.672222467432}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178350717.8/warc/CC-MAIN-20210225041034-20210225071034-00345.warc.gz"}
|
http://www.mathworks.com/help/ident/gs/identify-low-order-transfer-functions-process-models-using-the-gui.html?requestedDomain=www.mathworks.com&nocookie=true
|
# Documentation
## Identify Low-Order Transfer Functions (Process Models) Using System Identification App
### Introduction
#### Objectives
Estimate and validate simple, continuous-time transfer functions from single-input/single-output (SISO) data to find the one that best describes the system dynamics.
After completing this tutorial, you will be able to accomplish the following tasks using the System Identification app :
• Import data objects from the MATLAB® workspace into the app.
• Plot and process the data.
• Estimate and validate low-order, continuous-time models from the data.
• Export models to the MATLAB workspace.
• Simulate the model using Simulink® software.
Note: This tutorial uses time-domain data to demonstrate how you can estimate linear models. The same workflow applies to fitting frequency-domain data.
#### Data Description
This tutorial uses the data file `proc_data.mat`, which contains 200 samples of simulated single-input/single-output (SISO) time-domain data. The input is a random binary signal that oscillates between -1 and 1. White noise (corresponding to a load disturbance) is added to the input with a standard deviation of 0.2, which results in a signal-to-noise ratio of about 20 dB. This data is simulated using a second-order system with underdamped modes (complex poles) and a peak response at 1 rad/s:
`$G\left(s\right)=\frac{1}{1+0.2s+{s}^{2}}{e}^{-2s}$`
The sample time of the simulation is 1 second.
### What Is a Continuous-Time Process Model?
Continuous-time process models are low-order transfer functions that describe the system dynamics using static gain, a time delay before the system output responds to the input, and characteristic time constants associated with poles and zeros. Such models are popular in the industry and are often used for tuning PID controllers, for example. Process model parameters have physical significance.
You can specify different process model structures by varying the number of poles, adding an integrator, or including a time delay or a zero. The highest process model order you can specify in this toolbox is three, and the poles can be real or complex (underdamped modes).
In general, a linear system is characterized by a transfer function G, which is an operator that takes the input u to the output y:
`$y=Gu$`
For a continuous-time system, G relates the Laplace transforms of the input U(s) and the output Y(s), as follows:
`$Y\left(s\right)=G\left(s\right)U\left(s\right)$`
In this tutorial, you estimate G using different process-model structures.
For example, the following model structure is a first-order, continuous-time model, where K is the static gain, Tp1 is a time constant, and Td is the input-to-output delay:
`$G\left(s\right)=\frac{K}{1+s{T}_{p1}}{e}^{-s{T}_{d}}$`
### Preparing Data for System Identification
#### Loading Data into the MATLAB Workspace
Load the data in `proc_data.mat` by typing the following command in the MATLAB Command Window:
```load proc_data ```
This command loads the data into the MATLAB workspace as the data object `z`. For more information about `iddata` objects, see the corresponding reference page.
#### Opening the System Identification App
To open the System Identification app , type the following command at the MATLAB Command Window:
`systemIdentification`
The default session name, `Untitled`, appears in the title bar.
#### Importing Data Objects into the System Identification App
You can import data object into the app from the MATLAB workspace.
You must have already loaded the sample data into MATLAB, as described in Loading Data into the MATLAB Workspace, and opened the app, as described in Opening the System Identification App.
If you have not performed these steps, click here to complete them.
To import a data object into the System Identification app :
1. Select Import data > Data object.
This action opens the Import Data dialog box.
2. In the Import Data dialog box, specify the following options:
• Object — Enter `z` as the name of the MATLAB variable that is the time-domain data object. Press Enter.
• Data name — Use the default name `z`, which is the same as the name of the data object you are importing. This name labels the data in the System Identification app after the import operation is completed.
• Starting time — Enter `0` as the starting time. This value designates the starting value of the time axis on time plots.
• Sample time — Enter `1` as the time between successive samples in seconds. This value represents the actual sample time in the experiment.
The Import Data dialog box now resembles the following figure.
3. Click Import to add the data to the System Identification app. The app adds an icon to represent the data.
4. Click Close to close the Import Data dialog box.
#### Plotting and Processing Data
In this portion of the tutorial, you evaluate the data and process it for system identification. You learn how to:
• Plot the data.
• Remove offsets by subtracting the mean values of the input and the output.
• Split the data into two parts. You use one part of the data for model estimation, and the other part of the data for model validation.
The reason you subtract the mean values from each signal is because, typically, you build linear models that describe the responses for deviations from a physical equilibrium. With steady-state data, it is reasonable to assume that the mean levels of the signals correspond to such an equilibrium. Thus, you can seek models around zero without modeling the absolute equilibrium levels in physical units.
You must have already imported data into the System Identification app, as described in Importing Data Objects into the System Identification App.
If you have not performed this step, click here to complete it.
To plot and process the data:
1. Select the Time plot check box to open the Time Plot window.
The bottom axes show the input data—a random binary sequence, and the top axes show the output data.
The next two steps demonstrate how to modify the axis limits in the plot.
2. To modify the vertical-axis limits for the input data, select Options > Set axes limits in the Time Plot figure window.
3. In the Limits for Time Plot dialog box, set the new vertical axis limit of the input data channel u1 to [-1.5 1.5]. Click Apply and Close.
Note: The other two fields in the Limits for Time Plot dialog box, Time and y1, let you set the axis limits for the time axis and the output channel axis, respectively. You can also specify each axis to be logarithmic or linear by selecting the corresponding option.
The following figure shows the updated time plot.
4. In the System Identification app , select <--Preprocess > Quick start to perform the following four actions:
• Subtract the mean value from each channel.
• Split the data into two parts.
• Specify the first part of the data as estimation data (or Working Data).
• Specify the second part of the data as Validation Data.
Learn More. For information about supported data processing operations, such as resampling and filtering the data, see Preprocess Data.
### Estimating a Second-Order Transfer Function (Process Model) with Complex Poles
#### Estimating a Second-Order Transfer Function Using Default Settings
In this portion of the tutorial, you estimate models with this structure:
`$G\left(s\right)=\frac{K}{\left(1+2\xi {T}_{w}s+{T}_{w}{}^{2}{s}^{2}\right)}{e}^{-{T}_{d}s}$`
You must have already processed the data for estimation, as described in Plotting and Processing Data.
If you have not performed this step, click here to complete it.
To identify a second-order transfer function:
1. In the System Identification app, select Estimate > Process models to open the Process Models dialog box.
2. In the Model Transfer Function area of the Process Models dialog box, specify the following options:
• Under Poles, select `2` and `Underdamped`.
This selection updates the Model Transfer Function to a second-order model structure that can contain complex poles.
• Make sure that the Zero and Integrator check boxes are cleared to exclude a zero and an integrator (self-regulating ) from the model.
3. The Parameter area of the Process Models dialog box now shows four active parameters: `K`, `Tw`, `Zeta`, and `Td`. In the Initial Guess area, keep the default `Auto-selected` option to calculate the initial parameter values during the estimation. The Initial Guess column in the Parameter table displays `Auto`.
4. Keep the default Bounds values, which specify the minimum and maximum values of each parameter.
Tip If you know the range of possible values for a parameter, you can type these values into the corresponding Bounds fields to help the estimation algorithm.
5. Keep the default settings for the estimation algorithm:
• Disturbance Model`None` means that the algorithm does not estimate the noise model. This option also sets the Focus to `Simulation`.
• Focus`Simulation` means that the estimation algorithm does not use the noise model to weigh the relative importance of how closely to fit the data in various frequency ranges. Instead, the algorithm uses the input spectrum in a particular frequency range to weigh the relative importance of the fit in that frequency range.
Tip The `Simulation` setting is optimized for identifying models that you plan to use for output simulation. If you plan to use your model for output prediction or control applications, or to improve parameter estimates using a noise model, select `Prediction`.
• Initial condition`Auto` means that the algorithm analyzes the data and chooses the optimum method for handling the initial state of the system. If you get poor results, you might try setting a specific method for handling initial states, rather than choosing it automatically.
• Covariance`Estimate` means that the algorithm computes parameter uncertainties that display as model confidence regions on plots.
The app assigns a name to the model, shown in the Name field (located at the bottom of the dialog box). By default, the name is the acronym `P2DU`, which indicates two poles (`P2`), a delay (`D`), and underdamped modes (`U`).
6. Click Estimate to add the model `P2DU` to the System Identification app.
#### Tips for Specifying Known Parameters
If you know a parameter value exactly, you can type this value in the Initial Guess column of the Process Models dialog box.
If you know the approximate value of a parameter, you can help the estimation algorithm by entering an initial value in the Initial Guess column. In this case, keep the Known check box cleared to allow the estimation to fine-tune this initial guess.
For example, to fix the time-delay value `Td` at `2`s, you can type this value into Value field of the Parameter table in the Process Models dialog box and select the corresponding Known check box.
#### Validating the Model
You can analyze the following plots to evaluate the quality of the model:
• Comparison of the model output and the measured output on a time plot
• Autocorrelation of the output residuals, and cross-correlation of the input and the output residuals
You must have already estimated the model, as described in Estimating a Second-Order Transfer Function Using Default Settings.
If you have not performed this step, click here to complete it.
Examining Model Output. You can use the model-output plot to check how well the model output matches the measured output in the validation data set. A good model is the simplest model that best describes the dynamics and successfully simulates or predicts the output for different inputs.
To generate the model-output plot, select the Model output check box in the System Identification app. If the plot is empty, click the model icon in the System Identification app window to display the model on the plot.
The System Identification Toolbox™ software uses input validation data as input to the model, and plots the simulated output on top of the output validation data. The preceding plot shows that the model output agrees well with the validation-data output.
The Best Fits area of the Model Output plot shows the agreement (in percent) between the model output and the validation-data output.
Recall that the data was simulated using the following second-order system with underdamped modes (complex poles), as described in Data Description, and has a peak response at 1 rad/s:
`$G\left(s\right)=\frac{1}{1+0.2s+{s}^{2}}{e}^{-2s}$`
Because the data includes noise at the input during the simulation, the estimated model cannot exactly reproduce the model used to simulate the data.
Examining Model Residuals. You can validate a model by checking the behavior of its residuals.
To generate a Residual Analysis plot, select the Model resids check box in the System Identification app.
The top axes show the autocorrelation of residuals for the output (whiteness test). The horizontal scale is the number of lags, which is the time difference (in samples) between the signals at which the correlation is estimated. Any fluctuations within the confidence interval are considered to be insignificant. A good model should have a residual autocorrelation function within the confidence interval, indicating that the residuals are uncorrelated. However, in this example, the residuals appear to be correlated, which is natural because the noise model is used to make the residuals white.
The bottom axes show the cross-correlation of the residuals with the input. A good model should have residuals uncorrelated with past inputs (independence test). Evidence of correlation indicates that the model does not describe how a portion of the output relates to the corresponding input. For example, when there is a peak outside the confidence interval for lag k, this means that the contribution to the output y(t) that originates from the input u(t-k) is not properly described by the model. In this example, there is no correlation between the residuals and the inputs.
Thus, residual analysis indicates that this model is good, but that there might be a need for a noise model.
### Estimating a Process Model with a Noise Component
#### Estimating a Second-Order Process Model with Complex Poles
In this portion of the tutorial, you estimate a second-order transfer function and include a noise model. By including a noise model, you optimize the estimation results for prediction application.
You must have already estimated the model, as described in Estimating a Second-Order Transfer Function Using Default Settings.
If you have not performed this step, click here to complete it.
To estimate a second-order transfer function with noise:
1. If the Process Models dialog box is not open, select Estimate > Process Models in the System Identification app. This action opens the Process Models dialog box.
2. In the Model Transfer Function area, specify the following options:
• Under Poles, select `2` and `Underdamped`. This selection updates the Model Transfer Function to a second-order model structure that can contain complex poles. Make sure that the Zero and Integrator check boxes are cleared to exclude a zero and an integrator (self-regulating ) from the model.
• Disturbance Model — Set to `Order 1` to estimate a noise model H as a continuous-time, first-order ARMA model:
`$H=\frac{C}{D}e$`
where and D are first-order polynomials, and e is white noise.
This action specifies the Focus as `Prediction`, which improves accuracy in the frequency range where the noise level is low. For example, if there is more noise at high frequencies, the algorithm assigns less importance to accurately fitting the high-frequency portions of the data.
• Name — Edit the model name to `P2DUe1` to generate a model with a unique name in the System Identification app.
3. Click Estimate.
4. In the Process Models dialog box, set the Disturbance Model to `Order 2` to estimate a second-order noise model.
5. Edit the Name field to `P2DUe2` to generate a model with a unique name in the System Identification app.
6. Click Estimate.
#### Validating the Models
In this portion of the tutorial, you evaluate model performance using the Model Output and the Residual Analysis plots.
You must have already estimated the models, as described in Estimating a Second-Order Transfer Function Using Default Settings and Estimating a Second-Order Process Model with Complex Poles.
If you have not performed these steps, click here to complete them.
Comparing the Model Output Plots. To generate the Model Output plot, select the Model output check box in the System Identification app. If the plot is empty or a model output does not appear on the plot, click the model icons in the System Identification app window to display these models on the plot.
The following Model Output plot shows the simulated model output, by default. The simulated response of the models is approximately the same for models with and without noise. Thus, including the noise model does not affect the simulated output.
To view the predicted model output, select Options > 5 step ahead predicted output in the Model Output plot window.
The following Model Output plot shows that the predicted model output of `P2DUe2` (with a second-order noise model) is better than the predicted output of the other two models (without noise and with a first-order noise model, respectively).
Comparing the Residual Analysis Plots. To generate the Residual Analysis plot, select the Model resids check box in the System Identification app. If the plot is empty, click the model icons in the System Identification app window to display these models on the plot.
`P2DUe2` falls well within the confidence bounds on the Residual Analysis plot.
To view residuals for `P2DUe2` only, remove models `P2DU` and `P2DUe1` from the Residual Analysis plot by clicking the corresponding icons in the System Identification app.
The Residual Analysis plot updates, as shown in the following figure.
The whiteness test for `P2DUe2` shows that the residuals are uncorrelated, and the independence test shows no correlation between the residuals and the inputs. These tests indicate that `P2DUe2` is a good model.
### Viewing Model Parameters
#### Viewing Model Parameter Values
You can view the numerical parameter values and other information about the model `P2DUe2` by right-clicking the model icon in the System Identification app . The Data/model Info dialog box opens.
The noneditable area of the dialog box lists the model coefficients that correspond to the following model structure:
`$G\left(s\right)=\frac{K}{\left(1+2\xi {T}_{w}s+{T}_{w}{}^{2}{s}^{2}\right)}{e}^{-{T}_{d}s}$`
The coefficients agree with the model used to simulate the data:
`$G\left(s\right)=\frac{1}{1+0.2s+{s}^{2}}{e}^{-2s}$`
#### Viewing Parameter Uncertainties
To view parameter uncertainties for the system transfer function, click Present in the Data/model Info dialog box, and view the information in the MATLAB Command Window.
```Kp = 0.99821 +/- 0.019982 Tw = 0.99987 +/- 0.0037697 Zeta = 0.10828 +/- 0.0042304 Td = 2.004 +/- 0.0029717 ```
The 1-standard-deviation uncertainty for each model parameter follows the `+/-` symbol.
`P2DUe2` also includes an additive noise term, where H is a second-order ARMA model and e is white noise:
`$H=\frac{C}{D}e$`
The software displays the noise model H as a ratio of two polynomials, `C(s)/D(s)`, where:
``` C(s) = s^2 + 2.186 (+/- 0.08467) s + 1.089 (+/- 0.07951) D(s) = s^2 + 0.2561 (+/- 0.09044) s + 0.5969 (+/- 0.3046) ```
The 1-standard deviation uncertainty for the model parameters is in parentheses next to each parameter value.
### Exporting the Model to the MATLAB Workspace
You can perform further analysis on your estimated models from the MATLAB workspace. For example, if the model is a plant that requires a controller, you can import the model from the MATLAB workspace into the Control System Toolbox™ product. Furthermore, to simulate your model in the Simulink software (perhaps as part of a larger dynamic system), you can import this model as a Simulink block.
The models you create in the System Identification app are not automatically available in the MATLAB workspace. To make a model available to other toolboxes, Simulink, and the System Identification Toolbox commands, you must export your model from the System Identification app to the MATLAB workspace.
To export the `P2DUe2` model, drag the model icon to the To Workspace rectangle in the System Identification app. Alternatively, click Export in the Data/model Info dialog box. The model now appears in the MATLAB Workspace browser.
Note: This model is an `idproc` model object.
### Simulating a System Identification Toolbox Model in Simulink Software
#### Prerequisites for This Tutorial
In this tutorial, you create a simple Simulink model that uses blocks from the System Identification Toolbox library to bring the data `z` and the model `P2DUe2` into Simulink.
To perform the steps in this tutorial, Simulink must be installed on your computer.
Furthermore, you must have already performed the following steps:
If you have not performed these steps, click here to complete them. Then, drag the `z` and the `P2DUe2` icons to the To Workspace rectangle in the System Identification app. Alternatively, click Export in the Data/model Info dialog box. The data and the model now appear in the MATLAB Workspace browser.
#### Preparing Input Data
Use the input channel of the data set `z` as input for simulating the model output by typing the following in the MATLAB Command Window:
```z_input = z; % Creates a new iddata object. z_input.y = []; % Sets the output channel % to empty.```
Alternatively, you can specify any input signal.
Learn More. For more information about representing data signals for system identification, see Representing Data in MATLAB Workspace.
#### Building the Simulink Model
To add blocks to a Simulink model:
1. On the MATLAB Home tab, click Simulink.
2. In the Simulink start page, click Blank Model. Then click Create Model to open a new model window.
3. In the Simulink model window, click to open the Library Browser. In the Library Browser, select the System Identification Toolbox library. The right side of the window displays blocks specific to the System Identification Toolbox product.
Tip Alternatively, to access the System Identification block library, type `slident` in the MATLAB Command Window.
4. Drag the following System Identification Toolbox blocks to the new model window:
• IDDATA Sink block
• IDDATA Source block
• IDMODEL model block
5. In the Simulink Library Browser, select the Simulink > Sinks library, and drag the Scope block to the new model window.
6. In the Simulink model window, connect the blocks to resembles the following figure.
Next, you configure these blocks to get data from the MATLAB workspace and set the simulation time interval and duration.
#### Configuring Blocks and Simulation Parameters
This procedure guides you through the following tasks to configure the model blocks:
• Getting data from the MATLAB workspace.
• Setting the simulation time interval and duration.
1. In the Simulink Editor, select Simulation > Model Configuration Parameters.
2. In the Configuration Parameters dialog box, in the Solver subpane, in the Stop time field, type `200`. Click OK.
This value sets the duration of the simulation to 200 seconds.
3. Double-click the Iddata Source block to open the Source Block Parameters: Iddata Source dialog box. Then, type the following variable name in the IDDATA object field:
`z_input`
This variable is the data object in the MATLAB workspace that contains the input data.
Tip As a shortcut, you can drag and drop the variable name from the MATLAB Workspace browser to the IDDATA object field.
Click OK.
4. Double-click the Idmodel block to open the Function Block Parameters: Idmodel dialog box.
1. Type the following variable name in the Model variable field:
`P2DUe2`
This variable represents the name of the model in the MATLAB workspace.
2. Clear the Add noise check box to exclude noise from the simulation. Click OK.
When Add noise is selected, Simulink derives the noise amplitude from the `NoiseVariance` property of the model and adds noise to the model accordingly. The simulation propagates this noise according to the noise model H that was estimated with the system dynamics:
`$H=\frac{C}{D}e$`
Click OK.
5. Double-click the Iddata Sink block to open the Sink Block Parameters: Iddata Sink dialog box. Type the following variable name in the IDDATA Name field:
`z_sim_out`
6. Type `1` in the Sample Time (sec.) field to set the sample time of the output data to match the sample time of the input data.
Click OK.
The resulting change to the Simulink model is shown in the following figure.
#### Running the Simulation
1. In the Simulink Editor, select Simulation > Run.
2. Double-click the Scope block to display the time plot of the model output.
3. In the MATLAB Workspace browser, notice the variable `z_sim_out` that stores the model output as an `iddata` object. You specified this variable name when you configured the Iddata Sink block.
This variable stores the simulated output of the model, and it is now available for further processing and exploration.
Was this topic helpful?
Get trial now
|
2016-05-06 15:33:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 11, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.46458879113197327, "perplexity": 1504.0588304119588}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461861831994.45/warc/CC-MAIN-20160428164351-00017-ip-10-239-7-51.ec2.internal.warc.gz"}
|
http://www.mathnet.ru/php/archive.phtml?wshow=paper&jrnid=rcd&paperid=64&option_lang=eng
|
RUS ENG JOURNALS PEOPLE ORGANISATIONS CONFERENCES SEMINARS VIDEO LIBRARY PACKAGE AMSBIB
General information Latest issue Archive Impact factor Search papers Search references RSS Latest issue Current issues Archive issues What is RSS
Regul. Chaotic Dyn.: Year: Volume: Issue: Page: Find
Regul. Chaotic Dyn., 2016, Volume 21, Issue 1, Pages 1–17 (Mi rcd64)
Topological Analysis Corresponding to the Borisov–Mamaev–Sokolov Integrable System on the Lie Algebra $so(4)$
Department of Fundamental Sciences, Azarbaijan Shahid Madani University, 35 Km Tabriz-Maragheh Road, Tabriz, Iran
Abstract: In 2001, A. V. Borisov, I. S. Mamaev, and V. V. Sokolov discovered a new integrable case on the Lie algebra $so(4)$. This is a Hamiltonian system with two degrees of freedom, where both the Hamiltonian and the additional integral are homogenous polynomials of degrees 2 and 4, respectively. In this paper, the topology of isoenergy surfaces for the integrable case under consideration on the Lie algebra $so(4)$ and the critical points of the Hamiltonian under consideration for different values of parameters are described and the bifurcation values of the Hamiltonian are constructed. Also, a description of bifurcation complexes and typical forms of the bifurcation diagram of the system are presented.
Keywords: topology, integrable Hamiltonian systems, isoenergy surfaces, critical set, bifurcation diagram, bifurcation complex, periodic trajectory
DOI: https://doi.org/10.1134/S1560354716010019
References: PDF file HTML file
Bibliographic databases:
MSC: 37Jxx, 70H06, 70E50, 70G40, 70H14
Accepted:20.12.2015
Language:
Citation: Rasoul Akbarzadeh, “Topological Analysis Corresponding to the Borisov–Mamaev–Sokolov Integrable System on the Lie Algebra $so(4)$”, Regul. Chaotic Dyn., 21:1 (2016), 1–17
Citation in format AMSBIB
\Bibitem{Akb16} \by Rasoul Akbarzadeh \paper Topological Analysis Corresponding to the Borisov–Mamaev–Sokolov Integrable System on the Lie Algebra $so(4)$ \jour Regul. Chaotic Dyn. \yr 2016 \vol 21 \issue 1 \pages 1--17 \mathnet{http://mi.mathnet.ru/rcd64} \crossref{https://doi.org/10.1134/S1560354716010019} \mathscinet{http://www.ams.org/mathscinet-getitem?mr=3457073} \zmath{https://zbmath.org/?q=an:06580139} \isi{http://gateway.isiknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcApp=PARTNER_APP&SrcAuth=LinksAMR&DestLinkType=FullRecord&DestApp=ALL_WOS&KeyUT=000373028300001} \scopus{http://www.scopus.com/record/display.url?origin=inward&eid=2-s2.0-84957586219}
• http://mi.mathnet.ru/eng/rcd64
• http://mi.mathnet.ru/eng/rcd/v21/i1/p1
SHARE:
Citing articles on Google Scholar: Russian citations, English citations
Related articles on Google Scholar: Russian articles, English articles
This publication is cited in the following articles:
1. Pavel E. Ryabov, Andrej A. Oshemkov, Sergei V. Sokolov, “The Integrable Case of Adler – van Moerbeke. Discriminant Set and Bifurcation Diagram”, Regul. Chaotic Dyn., 21:5 (2016), 581–592
2. A. A. Oshemkov, P. E. Ryabov, S. V. Sokolov, “Explicit determination of certain periodic motions of a generalized two-field gyrostat”, Russ. J. Math. Phys., 24:4 (2017), 517–525
3. P. E. Ryabov, “Explicit integration of the system of invariant relations for the case of M. Adler and P. van Moerbeke”, Dokl. Math., 95:1 (2017), 17–20
4. R. Akbarzadeh, “The topology of isoenergetic surfaces for the Borisov–Mamaev–Sokolov integrable case on the Lie algebra $so(3,1)$”, Theoret. and Math. Phys., 197:3 (2018), 1727–1736
• Number of views: This page: 166 References: 20
|
2020-01-20 08:59:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5050959587097168, "perplexity": 7585.079612102103}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250598217.23/warc/CC-MAIN-20200120081337-20200120105337-00491.warc.gz"}
|
https://sio2.mimuw.edu.pl/c/pa-2019-1/s/351241/source/
|
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 #include #include #include void one() { int n; std::cin >> n; std::map from, to; for (int i = 0; i < n; ++i) { int l, a, b; std::cin >> l >> a >> b; from[a] += l; to[b] += l; } int diff = 0; auto p = from.begin(); auto q = to.begin(); for (;;) { if (p->second < q->second) { diff -= p->first * p->second; diff += q->first * p->second; q->second -= p->second; ++p; } else if (p->second == q->second) { diff -= p->first * p->second; diff += q->first * q->second; ++p; ++q; } else { diff -= p->first * q->second; diff += q->first * q->second; p->second -= q->second; ++q; } if (diff < 0 || p == from.end()) { break; } } std::cout << (diff == 0 ? "TAK" : "NIE") << "\n"; } int main() { int t; std::cin >> t; for (int i = 0; i < t; ++i) { one(); } }
|
2022-09-30 15:56:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39452382922172546, "perplexity": 2022.286136355893}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335491.4/warc/CC-MAIN-20220930145518-20220930175518-00561.warc.gz"}
|
https://www.semanticscholar.org/paper/Magnetic-field-influence-on-the-early-time-dynamics-Greif-Greiner/ba3a8176e3f54a3642362243acbfc67ac80968bd
|
# Magnetic field influence on the early time dynamics of heavy-ion collisions
@article{Greif2017MagneticFI,
title={Magnetic field influence on the early time dynamics of heavy-ion collisions},
author={Moritz Greif and Carsten Greiner and Zhe Xu},
journal={Physical Review C},
year={2017},
volume={96},
pages={014903}
}
• Published 21 April 2017
• Physics
• Physical Review C
In high-energy heavy-ion collisions, the magnetic field is very strong right after the nuclei penetrate each other and a nonequilibrium system of quarks and gluons builds up. Even though quarks might not be very abundant initially, their dynamics must necessarily be influenced by the Lorentz force. Employing the (3+1)-d partonic cascade Boltzmann approach to multiparton scatterings (BAMPS), we show that the circular Larmor movement of the quarks leads to a strong positive anisotropic flow of…
15 Citations
## Figures from this paper
### Magnetic fields in heavy ion collisions: flow and charge transport
• Physics
The European Physical Journal C
• 2020
At the earliest times after a heavy-ion collision, the magnetic field created by the spectator nucleons will generate an extremely strong, albeit rapidly decreasing in time, magnetic field. The
### Magnetic field in expanding quark-gluon plasma
• Physics
• 2018
Intense electromagnetic fields are created in the quark-gluon plasma by the external ultrarelativistic valence charges. The time evolution and the strength of this field are strongly affected by the
### Effect of intense magnetic fields on reduced-magnetohydrodynamics evolution in sNN=200 GeV Au + Au collisions
• Physics
• 2017
We investigate the effect of large magnetic fields on the $2+1$ dimensional reduced-magnetohydrodynamical expansion of hot and dense nuclear matter produced in $\sqrt{s_{\rm NN}}$ = 200 GeV Au+Au
### Transverse expansion of hot magnetized Bjorken flow in heavy ion collisions
• Physics
The European Physical Journal C
• 2019
We argue that the existence of an inhomogeneous external magnetic field can lead to radial flow in transverse plane. Our aim is to show how the introduction of a magnetic field generalizes the
### Transverse expansion of (1 + 2) dimensional magneto-hydrodynamics flow with longitudinal boost invariance
• Physics
• 2020
In the present work, we investigate the effects of magnetic field on expanding hot and dense nuclear matter as an ideal fluid. We consider QGP, on the particular case of a (1 + 2) dimensional
### Transverse expansion of ideal-magneto-hydrodynamics flow in (1+2D) Bjorken scenario
• Physics
• 2020
In the present work, we investigate the effect of an external magnetic field on the dimensional magnetohydrodynamical expansion of hot and dense nuclear matter as an ideal fluid with infinite
### Magneto-vortical evolution of QGP in heavy ion collisions
• Physics
Journal of Physics G: Nuclear and Particle Physics
• 2018
The interplay of magnetic field and thermal vorticity in a relativistic ideal fluid might generate fluid vorticity during the fluid evolution provided the flow fields and the entropy density of the
### Relativistic dynamics of point magnetic moment
• Physics
• 2017
The covariant motion of a classical point particle with magnetic moment in the presence of (external) electromagnetic fields is revisited. We are interested in understanding extensions to the Lorentz
### Relativistic non-resistive viscous magnetohydrodynamics from the kinetic theory: a relaxation time approach
• Physics
• 2020
We derive the relativistic non-resistive, viscous second-order magnetohydrodynamic equations for the dissipative quantities using the relaxation time approximation. The Boltzmann equation is solved
## References
SHOWING 1-10 OF 33 REFERENCES
### The Relativistic Boltzmann Equation: Theory and Applications
• Physics
• 2012
1 Special Relativity.- 1.1 Introduction.- 1.2 Lorentz transformations.- 1.3 Tensors in Minkowski spaces.- 1.4 Relativistic mechanics.- 1.4.1 Four-velocity.- 1.4.2 Minkowski force.- 1.4.3 Elastic
### Phys
• Lett. B710, 171
• 2012
### arXiv:1509.07758 [nucl-ex
• (PHENIX), Phys. Rev. C94,
• 2016
### Phys
• Rev. C76, 024911
• 2007
### Phys
• Rev. C88, 024911
• 2013
### Nucl
• Phys. A803, 227
• 2008
### arXiv:0905.1070 [nucl-ex
• (PHENIX), Phys. Rev. C80,
• 2009
### Phys
• Rev. Lett. 114, 112301
• 2015
### Phys
• Rev. C 83, 054911
• 2011
|
2022-10-07 16:56:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.27533939480781555, "perplexity": 4838.543972043219}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338213.55/warc/CC-MAIN-20221007143842-20221007173842-00172.warc.gz"}
|
https://asvabtestpro.com/quiz/convert-3-meters-to-feet-627936391cb4089b0e7a6fe4/
|
# Convert 3 meters to feet.
9.843 feet
Explanation
There are 2.54 centimeters (cm) per inch and 100 cm per meter so there are
$$25.4 \frac{\mathrm{cm}}{i n} / 100 \frac{\mathrm{cm}}{i n}=0.0254 \frac{\mathrm{m}}{i n}$$
There are 12 inches (in) in a foot so there are
$$0.0254\frac{m}{{in}} \times \frac{{12in}}{{ft}} = 0.3048\frac{m}{{ft}}.3m/0.3048\frac{m}{{ft}} = 9.843 \text { feet }$$
|
2022-11-28 01:31:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6797329187393188, "perplexity": 4442.050586535674}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710462.59/warc/CC-MAIN-20221128002256-20221128032256-00093.warc.gz"}
|
https://gmatclub.com/forum/in-how-many-ways-can-7-identical-balls-be-placed-into-four-boxes-p-q-r-325548.html
|
GMAT Question of the Day: Daily via email | Daily via Instagram New to GMAT Club? Watch this Video
It is currently 13 Jul 2020, 18:31
### GMAT Club Daily Prep
#### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
# In how many ways can 7 identical balls be placed into four boxes P,Q,R
Author Message
TAGS:
### Hide Tags
Manager
Joined: 25 Jan 2020
Posts: 60
Location: India
Concentration: Technology, Strategy
GPA: 3.2
WE: Information Technology (Retail Banking)
In how many ways can 7 identical balls be placed into four boxes P,Q,R [#permalink]
### Show Tags
31 May 2020, 02:01
1
5
00:00
Difficulty:
65% (hard)
Question Stats:
32% (01:26) correct 68% (01:55) wrong based on 31 sessions
### HideShow timer Statistics
In how many ways can 7 identical balls be placed into four boxes P,Q,R,S such that the two boxes P and Q have atleast one ball each?
A.84
B.70
C.120
D.56
E.54
Math Expert
Joined: 02 Aug 2009
Posts: 8755
Re: In how many ways can 7 identical balls be placed into four boxes P,Q,R [#permalink]
### Show Tags
31 May 2020, 02:45
2
In how many ways can 7 identical balls be placed into four boxes P,Q,R,S such that the two boxes P and Q have atleast one ball each?
A.84
B.70
C.120
D.56
E.54
As P and Q have to have at least one ball, let us give one each to P and Q. Now we are left with 5 balls and they have to be distributed amongst P, Q, R and S.
So P+Q+R+S=5
Now the direct formula of putting three partitions in 5 balls makes it 5+3 or 8 out of which we have to select these 3 partitions. => 8C3=8*7*6/3!=56
D
Since 5 is not a big number, we can find each way of distribution as
1) 5,0,0,0 —4 way
2) 4,1,0,0 — 4!/2!=12
3) 3,2,0,0 — 12 ways
4) 3,1,1,0 — 12 ways
5) 2,2,1,0– 12 ways
6) 2,1,1,1– 4 ways
Total 56 ways
_________________
Senior Manager
Joined: 10 Dec 2017
Posts: 280
Location: India
Re: In how many ways can 7 identical balls be placed into four boxes P,Q,R [#permalink]
### Show Tags
31 May 2020, 02:57
1
In how many ways can 7 identical balls be placed into four boxes P,Q,R,S such that the two boxes P and Q have atleast one ball each?
A.84
B.70
C.120
D.56
E.54
Though i got it wrong, here is my solution
P(1), Q(1), (R+S)[5]=1*1*6( because 5 balls can be distributed among R and S in (0,5),(1,4),(2,3)*(2) as they can either be in R or S=6 ways
[P+Q](3) R+S(4)= one ball in P , two ball in Q or Reverse so 2 ways * for R and S ( 0,4),(1,3),(2,2) 5 ways as 2,2 is same for R or S=10 ways
P+Q=4, R+S=3 Total 12 ways P+Q=(1,3),(2,2),(3,1) and R+S=(0,3),(1,2),(2,1),(3,0)
P+Q=5, R+S=2 Total 12 ways P+Q=(1,4), (4,1),(2,3),(3,2) R+S= (0,2),(2,0),(1,1)
P+Q=6, R+S=1 Total 10 ways P+Q= (1,5),(5,1),(3,3),(2,4),(4,2) R+S=(0,1),(1,0)
P+Q=7 R+S=0 Total 6 ways P+Q= (1,6),(6,1),(2,5),(5,2),(3,4),(4,3)
Total=6+10+12+12+10+6
56
D:)
PS Forum Moderator
Joined: 18 Jan 2020
Posts: 1175
Location: India
GPA: 4
Re: In how many ways can 7 identical balls be placed into four boxes P,Q,R [#permalink]
### Show Tags
31 May 2020, 03:05
chetan2u wrote:
In how many ways can 7 identical balls be placed into four boxes P,Q,R,S such that the two boxes P and Q have atleast one ball each?
A.84
B.70
C.120
D.56
E.54
As P and Q have to have at least one ball, let us give one each to P and Q. Now we are left with 5 balls and they have to be distributed amongst P, Q, R and S.
So P+Q+R+S=5
Now the direct formula of putting three partitions in 5 balls makes it 5+3 or 8 out of which we have to select these 3 partitions. => 8C3=8*7*6/3!=56
D
Since 5 is not a big number, we can find each way of distribution as
1) 5,0,0,0 —4 way
2) 4,1,0,0 — 4!/2!=12
3) 3,2,0,0 — 12 ways
4) 3,1,1,0 — 12 ways
5) 2,2,1,0– 12 ways
6) 2,1,1,1– 4 ways
Total 56 ways
Sir can you explain this highlighted part. Why we put 3 partition in 5. When there are 4 boxes.
Posted from my mobile device
Math Expert
Joined: 02 Aug 2009
Posts: 8755
Re: In how many ways can 7 identical balls be placed into four boxes P,Q,R [#permalink]
### Show Tags
31 May 2020, 03:13
1
yashikaaggarwal wrote:
chetan2u wrote:
In how many ways can 7 identical balls be placed into four boxes P,Q,R,S such that the two boxes P and Q have atleast one ball each?
A.84
B.70
C.120
D.56
E.54
As P and Q have to have at least one ball, let us give one each to P and Q. Now we are left with 5 balls and they have to be distributed amongst P, Q, R and S.
So P+Q+R+S=5
Now the direct formula of putting three partitions in 5 balls makes it 5+3 or 8 out of which we have to select these 3 partitions. => 8C3=8*7*6/3!=56
D
Since 5 is not a big number, we can find each way of distribution as
1) 5,0,0,0 —4 way
2) 4,1,0,0 — 4!/2!=12
3) 3,2,0,0 — 12 ways
4) 3,1,1,0 — 12 ways
5) 2,2,1,0– 12 ways
6) 2,1,1,1– 4 ways
Total 56 ways
Sir can you explain this highlighted part. Why we put 3 partition in 5. When there are 4 boxes.
Posted from my mobile device
Hi
When we put 3 partitions, we get items in 4 places.
oolololo.....lines show partition , so 2,1,1,1
loollooo.....so 0,2,0,3
And so on
_________________
Intern
Joined: 17 Oct 2019
Posts: 17
Re: In how many ways can 7 identical balls be placed into four boxes P,Q,R [#permalink]
### Show Tags
31 May 2020, 03:19
chetan2u wrote:
Now the direct formula of putting three partitions in 5 balls makes it 5+3 or 8 out of which we have to select these 3 partitions. => 8C3=8*7*6/3!=56
Can you please explain this a bit. Why are you doing 5+3?
GMAT Club Legend
Joined: 18 Aug 2017
Posts: 6447
Location: India
Concentration: Sustainability, Marketing
GPA: 4
WE: Marketing (Energy and Utilities)
Re: In how many ways can 7 identical balls be placed into four boxes P,Q,R [#permalink]
### Show Tags
31 May 2020, 03:24
In how many ways can 7 identical balls be placed into four boxes P,Q,R,S such that the two boxes P and Q have atleast one ball each?
A.84
B.70
C.120
D.56
E.54
given that out of 7 balls P,Q need to have atleast one ball each so we put 1 ball each in P,Q
left with 5 balls which can be arranged in 4 boxes P,Q,R,S in
(5+4-1) c ( 4-1) ways
or say 8c3 ways
OPTION D ; 56
Math Expert
Joined: 02 Aug 2009
Posts: 8755
Re: In how many ways can 7 identical balls be placed into four boxes P,Q,R [#permalink]
### Show Tags
31 May 2020, 03:26
Ruchirkalra wrote:
chetan2u wrote:
Now the direct formula of putting three partitions in 5 balls makes it 5+3 or 8 out of which we have to select these 3 partitions. => 8C3=8*7*6/3!=56
Can you please explain this a bit. Why are you doing 5+3?
There are 5 balls left and we can put them in any way.
So ooooo, what we do is we take 3 partitions so that we can place those partitions between these 5 balls so that these 5 balls are distributed in 4 parts.
ooooolll, so total we have 5+3 places and the 3 partitions can take any of the 8 locations.
It can be
lllooooo....0,0,0,5
loololoo....0,2,1,2
ollloooo....1,0,0,4 and so on
This is the direct method for such linear equations where you have sum of variables on one side and a numerical value on the other side
_________________
GMAT Club Legend
Status: GMATINSIGHT Tutor
Joined: 08 Jul 2010
Posts: 4361
Location: India
GMAT: QUANT EXPERT
Schools: IIM (A)
GMAT 1: 750 Q51 V41
WE: Education (Education)
In how many ways can 7 identical balls be placed into four boxes P,Q,R [#permalink]
### Show Tags
Updated on: 31 May 2020, 05:05
2
2
In how many ways can 7 identical balls be placed into four boxes P,Q,R,S such that the two boxes P and Q have atleast one ball each?
A.84
B.70
C.120
D.56
E.54
$$P+Q+R+S = 7$$
$$P_{min} = 1$$
$$Q_{min} = 1$$
Let's give one ball in each box P and Q, so now we have 5 more balls to distribute among 4 Boxes P, Q, R and S
i.e. New Equation, P+Q+R+S = 5
Now, we have find non-negative solution to this equation
For any equation a+b+c+d = n
where n = Total Balls to be distributed and r = 4 (Number of variables)
The number of Non-negative solutions $$= (n+r-1)C_{r-1}$$
i.e. Total Non-negative Integer solutions of P+Q+R+S = 5 will be $$(5+4-1)C_{4-1} = 8C2 = 56$$
DERIVATION OF PARTITION RULE:
_________________
Prepare with PERFECTION to claim Q≥50 and V≥40 !!!
GMATinsight .............(Bhoopendra Singh and Dr.Sushma Jha)
e-mail: info@GMATinsight.com l Call : +91-9999687183 / 9891333772
One-on-One Skype classes l Classroom Coaching l On-demand Quant course l Admissions Consulting
Most affordable l Comprehensive l 2000+ Qn ALL with Video explanations l LINK: Courses and Pricing
Our SUCCESS STORIES: From 620 to 760 l Q-42 to Q-49 in 40 days l 590 to 710 + Wharton l
FREE GMAT Resource: 22 FREE (FULL LENGTH) GMAT CATs LINKS l NEW OG QUANT 50 Qn+VIDEO Sol.
Originally posted by GMATinsight on 31 May 2020, 03:52.
Last edited by GMATinsight on 31 May 2020, 05:05, edited 1 time in total.
VP
Joined: 09 Mar 2016
Posts: 1256
Re: In how many ways can 7 identical balls be placed into four boxes P,Q,R [#permalink]
### Show Tags
31 May 2020, 04:55
In how many ways can 7 identical balls be placed into four boxes P,Q,R,S such that the two boxes P and Q have atleast one ball each?
A.84
B.70
C.120
D.56
E.54
P 4.......2........3........5........4
Q 1.......2.........2.......2........2
R 1.......2.........1.......0.........1
S 1.......1.........1.......0.........0
So $$\frac{4!}{3!} + \frac{4!}{3!}+\frac{4!}{2!} +\frac{4!}{2!} +\frac{4!}{1! }= 4+4+12+12+24 = 56$$
So D
I wonder if there is a shortcut solution to this question, cause it took me more than 4 min to figure out the answer
GMAT Club Legend
Status: GMATINSIGHT Tutor
Joined: 08 Jul 2010
Posts: 4361
Location: India
GMAT: QUANT EXPERT
Schools: IIM (A)
GMAT 1: 750 Q51 V41
WE: Education (Education)
Re: In how many ways can 7 identical balls be placed into four boxes P,Q,R [#permalink]
### Show Tags
31 May 2020, 05:03
1
dave13 wrote:
In how many ways can 7 identical balls be placed into four boxes P,Q,R,S such that the two boxes P and Q have atleast one ball each?
A.84
B.70
C.120
D.56
E.54
P 4.......2........3........5........4
Q 1.......2.........2.......2........2
R 1.......2.........1.......0.........1
S 1.......1.........1.......0.........0
So $$\frac{4!}{3!} + \frac{4!}{3!}+\frac{4!}{2!} +\frac{4!}{2!} +\frac{4!}{1! }= 4+4+12+12+24 = 56$$
So D
I wonder if there is a shortcut solution to this question, cause it took me more than 4 min to figure out the answer
dave13
This is a typical question based on partition rule. So if one needs to do it in faster way then only wa forward I see is the explanation I have mentioned above your post.
To understand Partition Rule, refer to the video attached.
_________________
Prepare with PERFECTION to claim Q≥50 and V≥40 !!!
GMATinsight .............(Bhoopendra Singh and Dr.Sushma Jha)
e-mail: info@GMATinsight.com l Call : +91-9999687183 / 9891333772
One-on-One Skype classes l Classroom Coaching l On-demand Quant course l Admissions Consulting
Most affordable l Comprehensive l 2000+ Qn ALL with Video explanations l LINK: Courses and Pricing
Our SUCCESS STORIES: From 620 to 760 l Q-42 to Q-49 in 40 days l 590 to 710 + Wharton l
FREE GMAT Resource: 22 FREE (FULL LENGTH) GMAT CATs LINKS l NEW OG QUANT 50 Qn+VIDEO Sol.
Re: In how many ways can 7 identical balls be placed into four boxes P,Q,R [#permalink] 31 May 2020, 05:03
|
2020-07-14 02:31:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.862708568572998, "perplexity": 2588.7484683159355}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657147917.99/warc/CC-MAIN-20200714020904-20200714050904-00581.warc.gz"}
|
https://blog.quantinsti.com/iron-condor-options-trading-strategy/
|
### Introduction
I have been trying to cover some of the simplest Option strategies including the Options Strangle Strategy and the Bull Call Spread Strategy which can be easily practised by traders who are new to Options. If you are new to options trading then you can check the options trading for dummies free course on Quantra. This time I will cover the Iron Condor trading strategy.
Anyone who trades in Option is well aware that they are constantly fighting against time decay, especially when you are on the buy side. A lot of strategies that are being practised are designed with an objective to have the time factor work for them rather than the other way around. One such strategy that can make the time decay work in your favour is the ‘Iron Condor’.
### What Is Iron Condor Trading Strategy?
Iron Condor strategy is one of the simplest strategies that can be practised by traders even with a small account. For people who are familiar with other basic Option trading strategies, Iron Condor strategy is basically a combination of the bull put spread and bear call spread Option trading strategy.
Another simpler explanation for those who are not aware of the above-mentioned strategies is that Iron Condor strategy is a four-legged trade that starts with selling out-of-the-money put and selling out-of-the-money call for the same underlying security and expiration date. The trader will hope that the stock price stays between these positions at the time of its expiration. But shorting options can involve a lot of risks and any unfavourable situation can result in a tremendous loss. Hence, to protect this risk the trader buys further out-of-the-money put and call, these four options are together called as Iron Condor strategy.
It is important to understand that Iron Condor strategy is a limited risk strategy and works best in a stable market with low volatility which can help the trader to earn limited profits.
#### Strategy Characteristics
Moneyness of the options:
Sell 1 OTM Put (Higher Strike) Sell 1 OTM Call (Lower Strike) Buy 1 OTM Put (Lower Strike) Buy 1 OTM Call (Higher Strike)
Maximum Loss:
Strike Price of Long Call - Strike Price of Short Call - Net Premium Received or Strike Price of Short Put - Strike Price of Long Put - Net Premium Received whichever is higher
Breakeven:
#### How does this strategy work?
Let us assume that a stock ABC is trading at a price of INR 100, to execute an Iron Condor trading strategy we will: Sell 80 Strike Put for INR 2.5 Sell 120 Strike Call for INR 2.5
With a hope that the price will remain within these two strike prices that we booked so that we make a profit. But, due to the risk of unlimited loss, we would protect our positions by: Buy 60 Strike Put for INR 1 Buy 140 Strike Call for INR 1
Collectively these four positions will form the Iron Condor for us:
#### How to implement this strategy?
Let us see how we can execute this strategy in a live market scenario.
For this, I will take the Option for Yes Bank Limited (Ticker – YESBANK) with an expiration date of 28th March 2018 and the current stock price of INR 323.40
Last 1-month stock price movement (source – Google Finance)
Here is the option chain of YESBANK for the expiry date of 28th March 2018.
I will be taking the following positions: Sell 350 Call at INR 3.30 Sell 300 Put at INR 3.40 Buy 370 Call at INR 1.30 Buy 280 Put at INR 1.20
Source: nseindia.com
##### Maximum Profit:
In case the stock doesn’t bounce much and stays between my booked positions i.e. between the price of INR 300 and INR 350. My payoff will be as follows:
So we made money on the initial positions that we booked by Selling the options but we lost them on the positions that we booked for protection. The total payoff turns out to be positive on 4.2 points.
##### Maximum Loss:
Let’s assume that there is a major volatility or the market bumps due to an uncertain event and made the stock to go up to INR 390. So this means that the positions booked as long and short put is out of the money for us. Here is how the payoff will look like in this scenario:
We made INR 3.4 on the short put and lost INR 1.2 on the long put. The short call is 40 points in the money so we made a loss of INR 36.7 (3.30 – 40) and the long call is 20 points in the money which made us INR 18.7 (20 – 1.30). The total tally brings us to a loss of INR 15.8. Remember, the long call that we bought for protection helped us to minimize our loss on the total trade.
Another important thing to note is that this is the maximum we can lose on this trade no matter how far or low the stock goes.
Now let’s see how will be the payoff for a series of underlying price:
We will now use the Python code to show you the payoff summary:
### Import Libraries
import numpy as np
import matplotlib.pyplot as plt
import seaborn
#### Call Payoff
def call_payoff(sT, strike_price, premium):
return np.where(sT > strike_price, sT - strike_price, 0) – premium# Stock price
spot_price = 323.40# Long call
strike_price_long_call = 370
strike_price_short_call = 350
premium_short_call = 3.30# Stock price range at expiration of the call
sT = np.arange(0.5*spot_price,2*spot_price,1)
fig, ax = plt.subplots()
ax.spines['bottom'].set_position('zero')
ax.plot(sT,payoff_long_call,label='Long 370 Strike Call',color='g')
plt.xlabel('Stock Price')
plt.ylabel('Profit and loss')
plt.legend()
#ax.spines['top'].set_visible(False) # Top border removed
#ax.spines['right'].set_visible(False) # Right border removed#ax.tick_params(top=False, right=False) # Removes the tick-marks on the RHS
plt.grid()
plt.show()
payoff_short_call = call_payoff(sT, strike_price_short_call, premium_short_call) * -1.0
fig, ax = plt.subplots()
ax.spines['bottom'].set_position('zero')
ax.plot(sT,payoff_short_call,label='Short 350 Strike Call',color='r')
plt.xlabel('Stock Price')
plt.ylabel('Profit and loss')
plt.legend()
plt.grid()
plt.show()
#### Put Payoff
def put_payoff(sT, strike_price, premium):
return np.where(sT < strike_price, strike_price - sT, 0) – premium# Stock price
spot_price = 323.40# Long put
strike_price_long_put = 280
strike_price_short_put = 300
premium_short_put = 3.40# Stock price range at expiration of the put
sT = np.arange(0.5*spot_price,2*spot_price,1)
fig, ax = plt.subplots()
ax.spines['bottom'].set_position('zero')
ax.plot(sT,payoff_long_put,label='Long 280 Strike Put',color='y')
plt.xlabel('Stock Price')
plt.ylabel('Profit and loss')
plt.legend()
plt.grid()
plt.show()
payoff_short_put = put_payoff(sT, strike_price_short_put, premium_short_put) * -1.0
fig, ax = plt.subplots()
ax.spines['bottom'].set_position('zero')
ax.plot(sT,payoff_short_put,label='Short 300 Strike Put',color='m')
plt.xlabel('Stock Price')
plt.ylabel('Profit and loss')
plt.legend()
plt.grid()
plt.show()
#### Iron Condor Strategy Payoff
payoff = payoff_long_call + payoff_short_call + payoff_long_put + payoff_short_put
fig, ax = plt.subplots()
ax.spines['bottom'].set_position('zero')
ax.plot(sT,payoff_long_call,'--',label='Long 370 Strike Call',color='g')
ax.plot(sT,payoff_short_call,'--',label='Short 350 Strike Call',color='r')
ax.plot(sT,payoff_long_put,'--',label='Long 280 Strike Put',color='y')
ax.plot(sT,payoff_short_put,'--',label='Short 300 Strike Put',color='m')
ax.plot(sT,payoff,label='Iron Condor')
plt.xlabel('Stock Price')
plt.ylabel('Profit and loss')
plt.legend()
plt.grid()
plt.show()
### Next Step
Learn the modelling of option pricing using Black Scholes Option Pricing model and plotting the same for a combination of various options. You can put any number of call and/or put options in this model and use a built-in macro (named ‘BS’) for calculating the BS model based option pricing for each option.
Update: We have noticed that some users are facing challenges while downloading the market data from Yahoo and Google Finance platforms. In case you are looking for an alternative source for market data, you can use Quandl for the same.
Disclaimer: All investments and trading in the stock market involve risk. Any decisions to place trades in the financial markets, including trading in stock or options or other financial instruments is a personal decision that should only be made after thorough research, including a personal risk and financial assessment and the engagement of professional assistance to the extent you believe necessary. The trading strategies or related information mentioned in this article is for informational purposes only.
|
2021-01-19 17:04:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18596288561820984, "perplexity": 3326.326078192012}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703519600.31/warc/CC-MAIN-20210119170058-20210119200058-00542.warc.gz"}
|