| url | text | date | metadata |
|---|---|---|---|
https://www.varsitytutors.com/act_math-help/how-to-find-the-period-of-the-cosine
|
# ACT Math : How to find the period of the cosine
## Example Questions
### Example Question #1 : How To Find The Period Of The Cosine
A function with period P will repeat on intervals of length P, and these intervals are referred to as periods.
Find the period of the function.
Explanation:
For a function of the form $y = a\cos(bx + c)$, the period is equal to $\frac{2\pi}{|b|}$.
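As a representative worked instance (the function $y = \cos(4x)$ is assumed purely for illustration; the original functions are not given here):

$$\text{period} = \frac{2\pi}{|b|} = \frac{2\pi}{4} = \frac{\pi}{2}$$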
### Example Question #42 : Cosine
A function with period P will repeat on intervals of length P, and these intervals are referred to as periods.
Find the period of the function.
Explanation:
For a function of the form $y = a\cos(bx + c)$, the period is equal to $\frac{2\pi}{|b|}$.
### Example Question #133 : Trigonometry
A function with period P will repeat on intervals of length P, and these intervals are referred to as periods.
Find the period of the function.
Explanation:
For a function of the form $y = a\cos(bx + c)$, the period is equal to $\frac{2\pi}{|b|}$.
### Example Question #1 : How To Find The Period Of The Cosine
A function with period $P$ will repeat its solutions in intervals of length $P$.
What is the period of the function?
Explanation:
For a trigonometric function $y = \cos(bx)$, the period is equal to $\frac{2\pi}{|b|}$.
### Example Question #2 : How To Find The Period Of The Cosine
A function with period $P$ will repeat its solutions in intervals of length $P$.
What is the period of the function?
Explanation:
For a trigonometric function $y = \cos(bx)$, the period is equal to $\frac{2\pi}{|b|}$.
### Example Question #3 : How To Find The Period Of The Cosine
A function with period $P$ will repeat its solutions in intervals of length $P$.
What is the period of the function?
|
2019-12-09 04:30:14
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8959571719169617, "perplexity": 844.2660490524911}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540517557.43/warc/CC-MAIN-20191209041847-20191209065847-00228.warc.gz"}
|
http://blogs.ch.cam.ac.uk/pmr/2007/09/page/2/
|
# Truth or beauty, continued
Continuing our discussion on whether a chemical structure diagram is copyrightable.
1. Steven Bachrach Says:
September 24th, 2007 at 9:27 pm
Peter,
I have to take exception to some of your claims. The chemical formula drawing is not the only way of communicating the compound. In fact there are really much better ways of doing this, though not necessarily the best for a human to readily read. The InChI or smiles or 3-D coordinates really capture more information in a more reusable and less likely to be error-ridden way (especially the 3-D coordinates). The chemical formula drawing is not even unique, as we have seen in your examples.
Furthermore, Totally Synthetic had many arbitrary decisions to make in how to represent the structure. I have modified this structure in a few simple ways to make this point:
Note that I have changed the orientation of the terminal isopropyl/OH groups and the way the amide connects to ring A. With regard to ring B, the wedges here are actually NOT how it has to be. Note that the carbons of ring B are not stereocenters. The structure is drawn to try to indicate that ring B sort of π-stacks above ring E. This may or may not be true. Furthermore, the oxygen of the ring could in fact be pointing backwards. In my representation, I decided not to indicate any of this 3-D relationship.
Now I am not claiming that my structure is better than the original. My claim, however, is that Totally Synthetic made some creative decisions in making this presentation, and thus it should be protected.
PMR: "With regard to ring B, the wedges here are actually NOT how it has to be. Note that the carbons of ring B are not stereocenters. " I don't know whether this is true or not. Looking at the 3D structure it seems to me that there are two isomers (not conformers) where TS's wedges show one. It may, however, be that the ring is sufficiently flexible that they interconvert rapidly enough not to be isomers.
More generally, however, there are many reasons why structural diagrams are essential. The diagram above is numbered. The numbers are essential to understand much of the data (spectral assignments, reactivity, etc.) They cannot be held in SMILES, or InChI or, indeed, in anything in common use other than CML (which has support for many sorts of annotation). Neither InChIs nor SMILES are any use for most organometallic compounds, polymers, intermolecular compounds, supermolecules, nanotubes, polymer beads, etc. Many of these things don't have useful 3D coordinates - we are working on polymers and have developed Polymer Markup Language - and we can generate 3D coordinates very nicely, but not many others can.
So I contend that for much of chemistry diagrams are the only method of primary communication. It's actually part of the problem of involving machines. How do we get these things into a formal system without losing information?
That's hard enough without the publishers' lawyers suing us.
# Semantic web : the scream!
I have just blogged Paul Miller's Talis Community Licence and realised that - I think - I used to get a feed from his/Talis blog. So I put it in the Feedreader and found a whole lot of posts on the semantic web (or Semantic Web). Now I had been battling with SPARQL for a day or two trying to make a query with real numbers, e.g.
FILTER (?foo < "1.23"^^xsd:float)
I could NOT get it to work. Finally my colleague Diana Stewart tracked it down to the fact that in some places in RDF you are allowed to use prefixes and in some places you can't. It's almost completely arbitrary. It's not in the simple tutorials. RDF is a place where if you make an inspired guess you will be wrong. The syntax (wrong, the 3-4 syntaxes) are all over the shop: XML, N3, Turtle, ??? It makes me scream.
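For reference, here is a minimal self-contained sketch of the query shape in question, written against the rdflib Python toolkit (a modern stand-in, not the store I was using; the graph contents and the ex: predicate are invented for illustration):

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import XSD

EX = Namespace("http://example.org/ns#")
g = Graph()
# two toy triples carrying xsd:float-typed values
g.add((EX.m1, EX.density, Literal("0.9", datatype=XSD.float)))
g.add((EX.m2, EX.density, Literal("2.5", datatype=XSD.float)))

query = """
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
PREFIX ex:  <http://example.org/ns#>
SELECT ?s ?v
WHERE {
  ?s ex:density ?v .
  FILTER (?v < "1.23"^^xsd:float)
}
"""
for s, v in g.query(query):
    print(s, v)  # expect only ex:m1, since 0.9 < 1.23
```

Strictly, SPARQL requires the prefix declaration for the typed literal in the FILTER; some processors pre-bind common prefixes and some don't, which is exactly the sort of arbitrariness complained about above.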
So I was pleased to see that Jeni Tennison (one of early evangelists of XSLT, coming up with some magic tricks - and goodness you need all the tricks you can get with XSLT) had the same reaction:
# Diazonamide : The Blue CrystalEye Greasemonkey lends a hand
There is some doubt about what the structure of diazonamide A is. Because there is no absolute way of assigning names to structures. We only agree what aspirin is because everyone has been assigning the same structure to it for 100+ years. Many people are careless with names and even more are careless with structure diagrams. Indeed there seems to be a minor industry in drawing some structures wrongly. A year or two back when Nick Day was pioneering the use of InChI he used "staurosporine" as an example. He found lots of structure diagrams and I think there were 19 (sic) different diagrams. Some were frankly "wrong". Others missed out the stereochemistry, others had other problems. And some of these were from suppliers sites (i.e. "labels on bottles").
So how can we be sure? It needs an authority - but which one? Staurosporine is a (potential?) drug, so... WHO drugs? British National Formulary? US National Pharmacopeia? Chemical Abstracts? Beilstein? All of these are pay-to-view. So I cannot look them up (remember I am at home, simulating an interested person, such as a patient). Ah! Pubchem... with 16 entries, and several variations of stereochemistry. Wikipedia has a nice picture... but this is about diazonamide...
On TotSynth's post there's a link to the latest paper (DOI: 10.1021/ja0744448). And following this I find:
PMR: The Blue Eye is NOT part of the abstract - it shows that the Blue Obelisk Greasemonkey has found a crystalEye entry which looks like this:
and here you can see the actual stereochemistry of the diazonamide nucleus (it's not exactly the title compound) so there is virtually no doubt. The diagram on the right is calculated from the 3D coordinates and the layout is through CDK - note the stereo wedges and hatches.
So now I know what some of the stereo is. And because PNAS have made the text Open I can read how it relates to TS's structure. The CDK may not be 100% beautiful, but it should be true (Cue some reader finding it's wrong and a bug in JUMBO, but that's what Open science is about). And you can always pay Chemical Abstracts 6.20 USD to check whether you have got it "right".
So install the Blue Obelisk Grease Monkey (blog post) in your Firefox browser and Open your Eyes to a whole new world of truth and beauty.
# Beauty is truth, truth beauty - and copyrightable?
In (Finding chemical structures - InChIs et al., an amusement) I explored the varied approaches to drawing structures and the problems of representing them. I commented that Totally Synthetic's diagrams were not only the most unambiguous but also the most beautiful. I now regret having done that, as Steve Bachrach has argued that this makes them copyrightable.
1. Steven Bachrach Says:
September 24th, 2007 at 5:23 pm
This discussion demonstrates why I believe that “structure drawing” falls within the domain of materials that can be copyrighted. What is the difference in all of these representations (let’s agree not to worry about what might be a different stereochemistry at one center)? In the eye of the viewer some of these are “ugly” and some are more aesthetically pleasing. I might argue that Totally Synthetic’s representation is not only clear, it might even be called “pleasing”. Since it is aesthetics (beauty, clarity, etc) that differentiates these drawings, the creator’s choices in how to display this molecule were critical creative acts. It seems to me that this defines work that should fall under copyright protection. The fact that all of these representations refer to the same underlying chemistry does not diminish in the least the creativity involved to create the last structure, and perhaps a total lack of creativity in producing the PubChem drawing. So it seems to me that one should be careful in the re-use of structure drawings. To me these “drawings” are not data, while the connection table, for example, is data.
Steven
PMR: Steve is a strong evangelist for OA and we are on the same side, but I don't think arguing for copyrighting chemical structure diagrams is helpful. Let's take the analogy of mathematical equations. I could argue that
$e^x = \sum_{n=0}^\infty {x^n \over n!} = \lim_{n \to \infty}\left(\frac{1}{0!} + \frac{x}{1!} + \frac{x^2}{2!} + \cdots + \frac{x^n}{n!}\right).$
(taken from Wikipedia on Euler) was beautiful, while
e^x = Σ(n=0 to ∞) x^n/n! was ugly.
Let's assume that Euler had access to TeX and had published his formula in a journal belonging to the strong-copyright school of thought and that a student has cut and pasted the formula to explain the summation. The publisher could then claim "the formula is beautifully typeset so you must take it down, or retype it". Poor Euler is dead so hasn't any say.
In the same way the chemical formula diagram is the ONLY means of communicating the structure. In TotSynth's case the wedge bonds in ring B are not to make it pretty but to emphasize what the compound actually is
while the PNAS structure tries to do the same but in a much uglier and gritty fashion. There is a real likelihood of confusion as to what the structure actually is:
1. DrZZ Says:
September 24th, 2007 at 12:26 pm
Interesting stuff. Let me add some additional points. One of the structures in PubChem comes from us (DTP/NCI). If you look at the compound record you get the mess you included above. If you navigate to the substance record (click on the CID, on that page look for the Substance: 1 link, when that hit comes up click on the SID, and on that page change the drop down choice for Compound Displayed from PubChem to deposited) you see a much more sensible 2D drawing. In a quick look, I think the difference between the two structures in PubChem is that one of the stereocenters in the NCI deposited structure is unspecified in the other structure. The NCI structure was submitted in 1997 by one of the authors of the original isolation paper. As the structure correction was published in 2001, it is almost certain that the NCI structure contains the original error. I say almost because we have no audit trail in our internal database for structures (at least not easily visible to me). An NSC has a structure, period. There is some possibility that the structure was fixed, but that just overwrites the previous structure. It just reinforces my view that it is extremely important to treat the structure of a substance as one more data point, subject to varied and possibly conflicting values.
It's very easy to get it wrong.
Now if I cut and paste the diagrams and say "this one shows clearly that the ring is sticking up" I might help avoid the wrong compound being given to the wrong patient somewhere down the line. (This is not hypothetical - these are possible drugs). Steve, can you justify a publisher saying - "we'll send the lawyers after you for posting copyright chemical structures"?
Because if so, the C21 will be enormously impoverished. So yes, it's beautiful, but NO, it mustn't be copyrighted by publishers.
# Talis licence for Open Data
I used to think Open Data was simple - "facts are not copyrightable" and everything follows. Now I am wiser and realise that data are complex and need a lot of attention - fast. So it's very valuable to see groups who are addressing the problem. Here is Paul Miller of Talis (who convened a WWW2007 session on Open Data)
18:11 24/09/2007, Nodalities
In the world of creative works, notions espoused by Lawrence Lessig and others over a number of years are becoming increasingly well understood. A Creative Commons license, for example, is recognised as giving the holder of rights an ability to prospectively grant certain permissions rather than limit use of their work by expecting all comers to request these permissions, again and again. Those rights are not cast aside, removing all opportunities to protect your work, your name, or your potential revenue stream. Rather, you are provided with a means to explicitly declare that your work may be used and reused by others in certain ways without their needing to request permission. Any other use is not forbidden; those uses must simply be negotiated in the 'normal' way... a normal way that also applied to those uses covered by Creative Commons licenses before the advent of those licenses.
Creative Commons licenses are an extension of copyright law, as enshrined in the legal frameworks of various jurisdictions internationally. As such, it doesn't really work terribly well for a lot of (scientific, business, whatever) data... but the absence of anything better has led people to try slapping Creative Commons licenses of various types on data that they wish to share. It will be interesting to see what happens, the first time one of those licenses needs to be upheld via a court!
At Talis, we have an interest in seeing large bodies of structured data available for use. Through the Talis Platform, we offer one means whereby such data may be stored, used, aggregated and mined, although we clearly recognise that similar data may very well also be required in similar contexts.
Recognising that contributors of such data need to be reassured as to the uses to which we - and others - may put their hard work, we spent some time a couple of years ago drafting something then called the Talis Community Licence. This draft licence is based upon protections enshrined in European Law, and has been used 'in anger' for a while to cover contributions of millions of records to one particular application on the Talis Platform.
There has been plenty of talk around 'open data' here on Nodalities, and on our sister blog Panlibus. See, for example, this recent post from Rob Styles. There were also fascinating discussions at the WWW2007 conference earlier this year.
Despite interest in open (or 'linked') data, licenses to provide protection (and, of course, to explicitly encourage reuse) are few and far between. Amongst zealous early adopters, there does seem to be a tendency to either (mis)use a Creative Commons license, to say nothing whatsoever, or to cast their data into the public domain. None of these strategies are fit for application to business-critical data.
Building upon our original work on the TCL, we recently provided funding to lawyers Jordan Hatcher and Charlotte Waelde. They were tasked with validating the principles behind the license, developing an effective expression of those principles that could be applied beyond the database-aware shores of Europe, and working with us to identify a suitable home in which this new licence could be hosted, nurtured, and carried forward for the benefit of stakeholders far outside Talis.
Today, Jordan posted the latest draft of this license (now going by the name 'Open Data Commons'), some rationale, and pointers to various ways in which he - and we - are seeking input and further validation.
As my colleague Rob (again!) has argued, curators of data need an option on the permissions continuum between free-for-all and locked down. The Open Data Commons, née Talis Community Licence, offers that option.
Take a look. Think about how you would use it. Consider what sort of administrative framework you would want behind such a license. Join the conversation.
PMR: First of all, many thanks for funding legal work on Open Data. Whatever else, we have to remain within the legal framework, or we court disaster at a later stage.
There will not be a single approach to this any more than there is a single Open Source licence. Motivations vary and, even more importantly, data is more varied than software. I know of two other efforts: Science Commons (in Cambridge US), springing from CC, and the Open Knowledge Foundation set up by the tireless Rufus Pollock (in Cambridge UK), who invited me to be on the board. We honour this by using the OKFN "Open Data" on
our own CrystalEye. I expect that people will choose different licences to emphasize different policies. (For example, I currently use Artistic as my software licence, as I don't want the name JUMBO to be misused for derivative works which are not compliant. I might well use BSD elsewhere, and so on.)
# CDK's Diazonamide and general thoughts on Openness
Noel O'Blog has suggested that I should use Rajarshi Guha's CDK service to lay out the Diazonamide structure (see my post Finding chemical structures - InChIs et al., an amusement)
1. baoilleach Says:
September 24th, 2007 at 7:59 am
For the record, you can compare with CDK’s SMILES to 2D at:
http://cheminfo.informatics.indiana.edu/~rguha/code/java/cdkws/cdkws.html#sdg
PMR: so here it is:
PMR: I think it's correct. Interpretable. I'd put it on the same level as the Daylight one. One message is that it is difficult for software to lay out structures with a 10-ring nucleus.
The point is that CDK is Open Source and can therefore be enhanced by the community. Daylight and the software that Pubchem uses (?Cactus?, ?Openeye?) aren't. CDK is joint leader, and we can improve it.
A complementary approach is to start making collections of human-drawn images. The intelligible Chemspider image was hand-drawn by the PNAS authors - I don't know how it got to Chemspider. (Personally I think it's pretty awful - I do not like stereo bonds which are rectangular rather than wedges. Why do people use them? And you only have to scale the image to corrupt this info.) So we need an Open collection of chemical structures.
This is not technically difficult but is lathered with copyright madness. Can I reproduce a chemical structure from Nature without permission? I've asked but they haven't got back to me. Can I reproduce a chemical structure diagram from Wiley? I've asked but... ... they haven't got back to me.
It has to be fully Open. Every structure diagram has to be copyright-free and accompanied by metadata that gives provenance and alternative descriptions (names, InChIs, etc.). Is there anywhere that has chemical images that I can download that fulfils all these permissions?
I've found one (sorry for the layout). Here's taxol:
Paclitaxel
β-(benzoylamino)-α-hydroxy-,6,12b-bis(acetyloxy)-12-(benzoyloxy)-2a,3,4,4a,5,6,9,10,11,12,12a,12b-dodecahydro-4,11-dihydroxy-4a,8,13,13-tetramethyl-5-oxo-7,11-methano-1H-cyclodeca(3,4)benz(1,2-b)oxet-9-ylester,(2aR-(2a-α,4-β,4a-β,6-β,9-α(α-R*,β-S*),11-α,12-α,12a-α,2b-α))-benzenepropanoic acid
And there's lots of data with it that looks like this:
I'll leave you to guess where this is. Clues: It's Open, re-usable, very highly curated, and the first place that students look. That - or a derivative - is where the world's chemistry should reside.
# Finding chemical structures - InChIs et al., an amusement
Totally Synthetic, Chemspider and I have been discussing the value of InChIs in blogs. TS's blog is, of course, Openly available under a CC licence, and he is widely revered in the community for the beauty and accuracy of his structural diagrams. This post is a slightly light-hearted voyage through what can be discovered with Toll-Access barriers in place. I leave readers to judge whether TS and Pubmed are up to the ease and value of the information from commercial providers.
I'm reading this from outside the University and I do not have a VPN. This is useful as it shows me what it's like to be an information-impoverished reader. TS blogged today about Diazonamide A, a natural product which was billed as the next big breakthrough in cancer some years ago. (It has 4 reports in Pubmed about its biology, and 26 about the chemical synthesis. Taxol has 30,000). Anyway TS has taken the advice of the Blue Obelisk list and managed to put InChIs into his blog.
I'll show his beautiful-as-always structure at the end, but meanwhile I wanted to see how easy it was to find the structure from freely accessible sites. This includes most abstracts (in science it seems to be almost universal to post abstracts in clear, so be grateful).
Wikipedia does not list it, but has the (intriguing and misleading) entry under "Trivial_name":
For example, the most important structural feature of Diazonamide is that it's a nonribosomal peptide, which is denoted by the suffix "amide".
PMR: it might have started as a peptide but I don't think many people would now call it that. (Unless there is another Diazonamide that I don't know of).
So on to the latest synthesis (Magnus, Cheung, Goldberg, Russell, Turnbull and Lynch. JACS, 2007, ASAP. DOI: 10.1021/ja0744448.), remembering I can't read the full text. The abstract is a superb illustration of hanging links (NullPointerExceptions in Java):
Abstract:
During the course of studies on the synthesis of diazonamide A 1, an unusual O-aryl into C-aryl rearrangement was discovered that allows partial control of the absolute stereochemistry of the C-10 quaternary stereogenic center. Treatment of 30 with TBAF/THF gave the O-tyrosine ethers 31 and 32 (1:1), which on heating each separately in chloroform at reflux rearranged to 33 and 34 in ratios of 84:16 and 56:44, respectively. This corresponds to a 70% yield of the correct C-10 stereoisomer 33 and a 30% yield of the wrong C-10 stereoisomer 34. Attempts to convert 34 into 33 by ipso-protonation and equilibration were unsuccessful. Confirmation of the stereochemical outcome of the rearrangement was obtained by converting 33 into 37, an advanced intermediate in the first synthesis of diazonamide A by Nicolaou et al. It was also found that the success of the above rearrangement is sensitive to the protecting group on both the tryptophan nitrogen atom and the tyrosine nitrogen atom.
PMR: What a splendid piece of non-communication! [My comments could apply to many publishers, not just ACS]. Without the full text (which, after considerable perusal will tell us what 1, 30, 31, 32, 33, 34 and 37 are) it's almost meaningless. I am reminded of Alice's comment on Jabberwocky:
"Somehow it seems to fill my head with ideas – only I don't exactly know what they are! However, SOMEBODY killed SOMETHING: that's clear, at any rate -- '"
PMR: and the authors made something from something else...
So off to Pubchem. Many compounds made by synthetic chemists are not in Pubchem because they are of no interest, but Diazonamide is. It has a structural diagram [1]
PMR: Lovely. I think it's correct, but it's not exactly beautiful. Like mathematical equations, chemical structures can be pretty or semantic. This is semantically correct and it's probably pretty to jellyfish (this was a marine compound) but not to humans.
So on the InChI. Pubchem tells me that the compound has InChI:
InChI=1/C40H34Cl2N6O6/c1-15(2)27-37-46-29-32(54-37)40-20-9-5-8-19(18-7-6-10-22-25(18)26(33(41)43-22)31-34(42)48-38(29)53-31)28(20)47-39(40)52-24-12-11-17(13-21(24)40)14-23(35(50)45-27)44-36(51)30(49)16(3)4/h5-13,15-16,23,27,30,39,43,47,49H,14H2,1-4H3,(H,44,51)(H,45,50)/t23-,27-,30-,39-,40u/m0/s1/f/h44-45H
The problem is that this is not pretty for blogs as it runs over the line ends and spaces are a problem. So IUPAC are working out new approaches and some of these are discussed by the Blue Obelisk.
There is also a SMILES:
CC(C)C1C2=NC3=C(O2)C45C(NC6=C(C=CC=C64)C7=C8C(=CC=C7)NC(=C8C9=C(N=C3O9)Cl)Cl)OC2=C5C=C(CC(C(=O)N1)NC(=O)C(C(C)C)O)C=C2
which is a linear way of encoding the structure. Let's go to the Daylight site (they invented SMILES) to see what it looks like:
I think it's correct, and it's certainly a lot better than the Pubchem offering but it's not beauty - except for Shrek.
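As an aside, the SMILES-to-InChI round trip is only a few lines with an open toolkit. A minimal sketch using RDKit (one open-source option, not one of the tools discussed above; aspirin stands in for the long diazonamide string purely for brevity):

```python
from rdkit import Chem  # open-source cheminformatics toolkit

# aspirin, chosen for brevity; any valid SMILES works
mol = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")
print(Chem.MolToInchi(mol))  # assumes an RDKit build with InChI support
```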
Let's try Chemical Abstracts. It's got every compound ever made. Maybe they will let me have a free go... (STNEasy) I find:
A free demo! Just what I wanted...
PMR: This is fine, and it points to the same abstract, but I can't get at the structure. Let's try CAS-Number lookup - it will tell me the number and the structure... and there is a free demo as well:
Oh dear... Yes, a free demo, but only if you are looking for caffeine. I can get all I want about caffeine from Wikipedia without paying 6.20 USD. Ah well.
So, off to chemspider which is free. The search for diazonamide A reveals:
10472888 is shown at full size. (There are two more structures but both are equally unreadable). Note that the atom counts of the structures are inconsistent - the actual composition - I think - is that of 4591072. I try to zoom the formula and get a featureless gray square on both IE and Firefox. So I try Jmol (shown right). Now the molecules are three-dimensional but the coordinates in chemspider are those of the 2-D diagram. Personally I regard this as extremely misleading and would NEVER use Jmol for 2D diagrams, but I shan't pursue this here.
So I still don't know what the molecule is. Where else? Perhaps I can use some more abstracts...
And the fourth one on Pubmed hits gold. It's from PNAS:
and it's FREE!!!!!
so we find the structure:
Truth at last. (For non-chemists the exact width of the lines matters, and the pixellation makes it very difficult to be sure.) But I'm sure it's correct.
And now what you have been waiting for - Totally Synthetic's structure:
I think you'll agree that the blogosphere is starting to emerge as a serious place to look for chemistry.
[1] pasted directly from the Pubchem site, suggesting we can create an image library for chemical structures
# open access : Thank you American Chemical Society
In my reviews of the practice of Open Access (Author Choice in Chemistry at ACS - and elsewhere?) I pointed out that there were deficiencies in access and labelling on Open offerings. I've now had a reply from Dave Martinsen:
Peter,
Thanks for pointing out the problem in accessing ACS AuthorChoice articles. This was a technical glitch which is in the process of being fixed. Please be assured that it is our intention that AuthorChoice material is available without charge from the time it is posted on the web. We believe the solutions we’re putting into place will prevent this access problem from happening again.
Dave
*********************************
David Martinsen
American Chemical Society
1155 16th St. NW
Washington, DC 20036
d_martinsen AT work-it-out
PMR: Thank you Dave (Dave - as I have already mentioned - has been very supportive of new approaches to chemical informatics).
AuthorChoice is a "hybrid Open Access" product produced by the ACS. "Hybrid" only applies to publishers (and sometimes specific journals) that are primarily closed (Toll Access, pay-to-read) but where authors may purchase "Open Access" for their specific article. (Many OA publishers require all authors to pay to publish). Every publisher has a different name for their hybrid products and almost all of them offer different rights and restrictions.
As I have said before, the quality of delivery of hybrid Open Access (and related products) is often poor. They are not well labelled, the navigation is poor, and the rights - if any - are often vague and contradictory. Hybrid offerings (as with the ACS) often still require the author to transfer copyright and do not allow full re-use of the article.
I am not (here) criticizing hybrid OA per se (though personally I think it is a distraction and is likely to be ineffective in every way). Nor am I concerned (here) with the price level, though I personally do not believe that I get good value from many publishers (as I require full permissions, including author retention of copyright). What concerned me here was that the reader (and thereby the author) was not getting what they were entitled to.
It is very clear that the OA community MUST insist on clear labelling and must police the practice. Many "OA" publishers are creating unacceptable offerings - either deliberately or probably through laziness and lack of commitment (I call this systemic failure of the industry). I had not intended to embark on any campaign and I am glad to see that others at Berlin5 are interested in putting in place more formal mechanisms. For example we need a system of labels - but that's not my story to tell.
I don't actually like attacking people (institutions are slightly different). Sometimes my role appears to be that of a gadfly. I didn't know why people use this particular analogy, so I looked it up in WP and found Gadfly
"Gadfly" is a term for people who upset the status quo by posing upsetting or novel questions, or attempt to stimulate innovation by proving an irritant.
The term "gadfly" was used by Plato to describe Socrates' relationship of uncomfortable goad to the Athenian political scene, which he compared to a slow and dimwitted horse. It was used earlier by the prophet Jeremiah in chapter 46 of his book. The term has been used to describe many politicians and social commentators.
During his defense when on trial for his life, Socrates, according to Plato's writings, pointed out that dissent, like the tiny (relative to the size of a horse) gadfly, was easy to swat, but the cost to society of silencing individuals who were irritating could be very high. "If you kill a man like me, you will injure yourselves more than you will injure me," because his role was that of a gadfly, "to sting people and whip them into a fury, all in the service of truth."
PMR: I'm delighted to know the etymology (or rather the usage). And perhaps that is sometimes why I like the Socratic approach - posing questions which require definite answers rather than generalities. But, ahem, although it grows here I really don't like hemlock.
# How blogging makes contacts and seeds communities
I mailed yesterday about how blogging links to other blogs and generates new contacts. Here is a direct example:
Jakob Says:
... and ...
From the librarian’s point of view I can tell you that archiving data is probably even more complex than it seems to be. From the computer scientist’s point of view I can tell you that the Semantic Web will enlighten us easily. From the Open Content movement’s point of view I can tell you that you should just license the data and make it available and usable for anyone - like you said: first make sure THAT the data CAN be used.
PMR: Thanks Jakob. There is a growing number of people like you - we need to link them to generate critical mass. In chemistry we have created the Blue Obelisk community and we have pooled our resources and efforts. This could be done for content systems - informally as well as through institutions - an example is our collaboration with Peter Sefton on authoring tools.
# Does linking to technorati tags generate spam?
In a recent post (blogs, folksonomies and tagging - get going!) I encouraged the Open Access community to start using blogs and tagging. I specifically pointed to Technorati to illustrate the value and showed that some conferences had huge amounts of traffic and others almost none. I gave several examples and gave links to the technorati summary of the posts under given tags. This was based on a particular URL structure.
On revisiting these sites I find that the lists at Technorati have been drastically altered. The berlin5 one has 11 porno spam links. The method is a fairly recent one - take the content of a genuine post and do some very crude lexical munging of the words and phrases (I get zillions of these each day submitted to the blog comments). Somehow they actually linked to sex sites in Cambridge, so maybe they interpret domain names. So it seems the spammers have found my post yesterday and somewhere generated spam content that is either injected into Technorati or has already been linked. AFAICS the genuine links are still there.
Then I looked at www2007, worrying that I would see the same. But whereas there were 300+ links yesterday to www2007, now there are only 6, all half a year old. Was Technorati spammed, and did it try to clean up?
If by linking to Technorati I have unwittingly generated spam I apologize, but this can be done in other ways.
I don't take Technorati counts very seriously - about as seriously as I take ISI citation counts - but it's a useful way of finding people. But maybe we have to be careful about the exact way we use it. I welcome enlightenment.
|
2015-07-03 17:19:05
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.42789068818092346, "perplexity": 1977.79986909253}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375096208.17/warc/CC-MAIN-20150627031816-00020-ip-10-179-60-89.ec2.internal.warc.gz"}
|
http://www.optimization-online.org/DB_HTML/2007/04/1630.html
|
A null-space primal-dual interior-point algorithm for nonlinear optimization with nice convergence properties

Xinwei Liu (mathlxwhebut.edu.cn), Yaxiang Yuan (yyxlsec.cc.ac.cn)

Abstract: We present a null-space primal-dual interior-point algorithm for solving nonlinear optimization problems with general inequality and equality constraints. The algorithm approximately solves a sequence of equality constrained barrier subproblems by computing a predictor step and a null space step in every iteration. The $\ell_2$ penalty function is taken as the merit function. Under very mild conditions on predictor steps and approximate Hessians, without assuming any regularity, it is proved that the limit point of the iterate sequence is either a Karush-Kuhn-Tucker point of the barrier subproblem, or a point that is strictly feasible for inequality constraints of the original problem and stationary for minimizing the $\ell_2$ norm of violations of equality constraints, provided the penalty parameter remains bounded; if the penalty parameter tends to infinity, there exists a limit point that is either an infeasible stationary point of minimizing the $\ell_2$ norm of violations of constraints of the original problem, or a Fritz-John point of the original problem. In addition, we analyze the local convergence properties of the algorithm, and prove that by suitably controlling the exactness of predictor steps, selecting the barrier parameter and Hessian approximation, the algorithm generates a superlinearly or quadratically convergent step. The conditions on guaranteeing the positiveness of the slack variable vector for a full step are presented.

Keywords: Global and local convergences, null-space technique, primal-dual interior-point methods, nonlinear optimization with inequality and equality constraints

Category 1: Nonlinear Optimization
Category 2: Nonlinear Optimization (Constrained Nonlinear Optimization)

Citation: Research report, Department of Applied Mathematics, Hebei University of Technology, Tianjin, China, March 2007

Download: [PDF]
Entry Submitted: 04/02/2007 | Entry Accepted: 04/02/2007 | Entry Last Modified: 03/13/2008
|
2020-01-28 01:41:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8729891180992126, "perplexity": 1133.307706861038}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251737572.61/warc/CC-MAIN-20200127235617-20200128025617-00181.warc.gz"}
|
https://socratic.org/questions/5318702c02bf340ba3cbf5bf
|
# Question bf5bf
Mar 7, 2014
Work is not related to direction, because it is a scalar quantity.
$$w = \vec{F}\cdot\vec{S} = \|\vec{F}\|\,\|\vec{S}\|\cos\theta$$
where $\|\vec{F}\|$ and $\|\vec{S}\|$ are the magnitudes of the vectors $\vec{F}$ and $\vec{S}$, and $\theta$ is the angle between them. The dot product of two vectors is a scalar quantity. It has size but no direction.
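As an illustrative special case (not in the original answer): a force perpendicular to the displacement does no work, since $\cos 90^\circ = 0$:

$$w = \|\vec{F}\|\,\|\vec{S}\|\cos 90^\circ = 0$$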
|
2020-01-20 20:44:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 6, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6371696591377258, "perplexity": 909.3425617811513}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250599789.45/warc/CC-MAIN-20200120195035-20200120224035-00399.warc.gz"}
|
https://openreview.net/forum?id=SNwH0dDGl7_
|
## Near-Optimal Deployment Efficiency in Reward-Free Reinforcement Learning with Linear Function Approximation
Abstract: We study the problem of deployment efficient reinforcement learning (RL) with linear function approximation under the \emph{reward-free} exploration setting. This is a well-motivated problem because deploying new policies is costly in real-life RL applications. Under the linear MDP setting with feature dimension $d$ and planning horizon $H$, we propose a new algorithm that collects at most $\widetilde{O}(\frac{d^2H^5}{\epsilon^2})$ trajectories within $H$ deployments to identify $\epsilon$-optimal policy for any (possibly data-dependent) choice of reward functions. To the best of our knowledge, our approach is the first to achieve optimal deployment complexity and optimal $d$ dependence in sample complexity at the same time, even if the reward is known ahead of time. Our novel techniques include an exploration-preserving policy discretization and a generalized G-optimal experiment design, which could be of independent interest. Lastly, we analyze the related problem of regret minimization in low-adaptive RL and provide information-theoretic lower bounds for switching cost and batch complexity.
|
2023-03-22 10:06:54
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4592974781990051, "perplexity": 560.6807641393584}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943809.22/warc/CC-MAIN-20230322082826-20230322112826-00155.warc.gz"}
|
https://stats.stackexchange.com/questions/379294/estimate-probability-from-sample-frequency-in-a-binomial-distribution?noredirect=1
|
# Estimate probability from sample frequency in a binomial distribution [duplicate]
If I get $s$ successes out of $n$ trials in a binomial distribution, what is the probability $p$ of getting a success in each individual trial?
Presumably $p = s/n$, but what if $s = 0$ or $s = n$? Would the probability $p$ really be $0$ or $1$?
That seems far too certain if the sample size $n$ is very small, so is there some correction factor to account for small sample sizes in estimating $p$?
• It depends. What is your loss function, prior distribution, or preferred estimation method?
– whuber
Nov 29, 2018 at 0:05
• Uniform prior distribution in $p$ Nov 29, 2018 at 0:09
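With the uniform prior mentioned in the comment above, one standard resolution (a sketch of the Bayesian route, not part of the original exchange) is Laplace's rule of succession: a $\mathrm{Beta}(1,1)$ prior updated with $s$ successes in $n$ trials gives a $\mathrm{Beta}(s+1,\,n-s+1)$ posterior, whose mean is

$$\hat{p} = \frac{s+1}{n+2},$$

so $s=0$ gives $\hat{p} = 1/(n+2)$ rather than $0$, and $s=n$ gives $\hat{p} = (n+1)/(n+2)$ rather than $1$.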
|
2022-07-07 05:31:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 11, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8134664297103882, "perplexity": 305.4585368042932}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104683683.99/warc/CC-MAIN-20220707033101-20220707063101-00089.warc.gz"}
|
https://wiki.sipeed.com/hardware/en/maixII/M2/other.html
|
# Others
Edit on 2022.08.23
This page describes additional usage of the V831 development board.
Note that after replacing a device, the app application in the MaixHub image can no longer be used.
## Replace screen
So far this development board supports 1.3 inch, 2.4 inch, and 2.8 inch IPS screens. They can be bought from our AliExpress shop; consult sales support for more information. If you need another screen size, you can email support@sipeed.com for commercial customization.
### Prepare
• The replacement screen and its adapter board (consult sales support for more information)
• M2-Dock
• Latest system image
### Connect board
The board can connect to the 1.3 inch screen directly, but not to the 2.4 inch or 2.8 inch screens, because their line sequence differs; those require the adapter board.
There is a mark 1 on the adapter board, which indicates the orientation for connecting the LCD screen and the development board.
The mark 1 on the screen is shown below:
The mark 1 on the adapter board is shown below:
The mark 1 on the development board is shown below:
Connect them as shown below.
### Change device tree
• The update_dtb application is for Allwinner Tina Linux
• The compiled dtb file can be downloaded from the Download station
The dtb file is compiled from the kernel; compiling it yourself is not recommended, as it is somewhat involved.
The dtb file naming scheme is as follows:
| File name | Fits screen | Fits camera |
|---|---|---|
| sipeed_2.8_240x320_vs3205.dtb | 2.8 inch | vs3205 |
| sipeed_2.8_240x320_sp2305.dtb | 2.8 inch | sp2305 |
| sipeed_2.4_240x320_vs3205.dtb | 2.4 inch | vs3205 |
| sipeed_2.4_240x320_sp2305.dtb | 2.4 inch | sp2305 |
| sipeed_1.3_240x240_vs3205.dtb | 1.3 inch | vs3205 |
| sipeed_1.3_240x240_sp2305.dtb | 1.3 inch | sp2305 |
Copy the compiled dtb file into the virtual U-disk, then run the following commands in the adb shell.
sync #Refresh contents
update_dtb /dev/mmcblk0 /root/sipeed_240x240_vs3205.dtb
reboot #Restart to apply
The device tree is now updated.
• If the screen displays incorrectly, you chose the wrong dtb file; update again with the correct one to fix this.
Here is a correct display picture.
## Replace camera
So far the MaixII-Dock development board supports two cameras, sp2305 and vs3205; both are sold in our online store (consult sales support for help). If you want to use another camera, we can do commercial customization for you, or you can adapt the drivers yourself.
As with replacing the screen, we need to update the device tree to change the driver.
### Prepare
• The replacement camera
• MaixII-Dock
• Latest system image
### Connect camera
Be careful with the camera's orientation: if you connect it in the wrong direction, the camera may burn out.
Just make sure the white point is in the same place.
### Update device tree
update_dtb is a tiny tool for Allwinner Tina Linux.
Put your downloaded dtb file into the virtual disk created by the development board, then run the following commands in the adb shell:
sync #Refresh content
update_dtb /dev/mmcblk0 /root/sipeed_240x240_vs3205.dtb
reboot #Reboot to apply
The camera is now changed.
The dtb file is compiled from the kernel; compiling it yourself is not recommended, as it is somewhat involved.
| File name | Fits screen | Fits camera |
|---|---|---|
| sipeed_2.8_240x320_vs3205.dtb | 2.8 inch | vs3205 |
| sipeed_2.8_240x320_sp2305.dtb | 2.8 inch | sp2305 |
| sipeed_2.4_240x320_vs3205.dtb | 2.4 inch | vs3205 |
| sipeed_2.4_240x320_sp2305.dtb | 2.4 inch | sp2305 |
| sipeed_1.3_240x240_vs3205.dtb | 1.3 inch | vs3205 |
| sipeed_1.3_240x240_sp2305.dtb | 1.3 inch | sp2305 |
| sipeed_1.3_240x240_ov2685.dtb | 1.3 inch | ov2685 |
## Toolchain
There is a file named toolchain-sunxi-musl-pack-2021-01-09.tar.xz in the Download station; it is the Linux toolchain for the V831.
You can use it if you need to, but for the V831 we suggest using MaixPy3 and MaixHub.
|
2022-10-02 22:38:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24318766593933105, "perplexity": 13522.275684331757}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337360.41/warc/CC-MAIN-20221002212623-20221003002623-00074.warc.gz"}
|
https://www.physicsforums.com/threads/trig-substitution-with-integration.56776/
|
Trig Substitution with Integration
1. Dec 14, 2004
Tom McCurdy
How would you go about solving
$$\int \frac{\sqrt{1-x^2}}{x^2}$$ ?
I have tried a few things... drawing out triangles... etc but can't seem to get it... I am kind of behind in math because I was gone for a while because of being sick and presentations.
2. Dec 14, 2004
Hurkyl
Staff Emeritus
Ok, let's take a step back...
Can you do $\int \sqrt{1-x^2} \, dx$?
3. Dec 14, 2004
dextercioby
Try the substitution $x\rightarrow \sin u$. And then integration by parts.
Daniel
4. Dec 14, 2004
Tom McCurdy
Would that just be
$$x=\sin\theta$$
$$\sqrt{1-\sin^2\theta}$$
$$\sin^2\theta=1-\cos^2\theta$$
$$\sqrt{1-(1-\cos^2\theta)}$$
$$\int \sqrt{-\cos^2\theta}$$
Last edited: Dec 14, 2004
5. Dec 14, 2004
Hurkyl
Staff Emeritus
Yep (assuming you meant $\cos^2\theta$, and get your signs right)
6. Dec 14, 2004
Tom McCurdy
hmm alright, I am still lost even on your backup step
7. Dec 14, 2004
Pyrrhus
Tom, again I emphasize the importance of the differential... do not forget to put them on your integrals.
Hurkyl means
$$\sqrt{1 - \sin^2\theta} = \sqrt{\cos^2\theta}$$
8. Dec 14, 2004
PICsmith
As long as you're not in the middle of a test or something you can look it up in integral tables and see what answer they got and from that determine what methods they used. For example the solution to this integral is in a form that looks like it was done by parts, plus it has an inverse sine in it, which hints at trig substitutions as was mentioned by dextercioby.
9. Dec 14, 2004
Tom McCurdy
wow... I missed that completely, I need more sleep... so what would I do for the original problem with x^2 in the denominator
10. Dec 14, 2004
Hurkyl
Staff Emeritus
The same thing!
11. Dec 14, 2004
Tom McCurdy
That's the problem: I have a test coming up on Thursday on all the material I missed... and I need to make sure I get a good grade in the class if I want any chance of getting accepted after getting deferred from MIT
12. Dec 14, 2004
Tom McCurdy
So would it just come out to be $$\int \frac{\sqrt{cos^2\theta}}{sin^2\theta}$$ ?
13. Dec 14, 2004
dextercioby
Not exactly. U need to transform "dx" as well. That will give another "cosine".
It should be
$$\int \frac{\cos^{2}\theta}{\sin^{2}\theta}d\theta$$.
14. Dec 14, 2004
Tom McCurdy
oh yeah I forgot about that... so it becomes the $$\int tan^2\theta$$
15. Dec 14, 2004
Tom McCurdy
My question is how did you decide to make $x=\sin u$
16. Dec 14, 2004
cyby
That is a standard trig substitution, no?
17. Dec 14, 2004
dextercioby
1. First of all, it's $$\cot^{2}\theta$$.
2. Experience at doing integrals??? Actually it was the expression under the radical that led to the natural substitution "sine"/"cosine", just because:
$$1-\sin^{2}\theta =\cos^{2}\theta$$ and another one similar.
Daniel.
Last edited: Dec 14, 2004
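(Finishing the integral from this point, a step the excerpt never reaches: using $\cot^2\theta = \csc^2\theta - 1$ and, from $x=\sin\theta$, $\cot\theta = \sqrt{1-x^2}/x$ and $\theta = \arcsin x$,

$$\int \cot^2\theta\,d\theta = \int(\csc^2\theta - 1)\,d\theta = -\cot\theta - \theta + C = -\frac{\sqrt{1-x^2}}{x} - \arcsin x + C.)$$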
18. Dec 14, 2004
Tom McCurdy
Alright, I am going to assume $x=4\sin u$ for
$$\int \frac{x^3}{\sqrt{x^2-4}}dx$$
Therefore
$$dx=cos\theta$$
$$\int \frac{4sin\theta^3}{\sqrt{4sin\theta^2-4}}*cos\theta$$
$$4 \int \frac{sin\theta^3}{\sqrt{sin^2\theta-1}} *cos\theta$$
$$4 \int \frac{sin\theta^3}{\sqrt{cos^2}}*cos\theta$$
$$4\int sin\theta^3 d\theta$$ ?????
Did i do it right???
Last edited: Dec 14, 2004
19. Dec 14, 2004
Parth Dave
it depends on what form it is in, if you have:
(a^2 - x^2) : x = asin(theta)
(a^2 + x^2) : x = atan(theta)
(x^2 - a^2) : x = asec(theta)
20. Dec 14, 2004
dextercioby
Hold on, what u posted is wrong: u don't need 4, but 2 (check the denominator)
$$dx=2 \cos\theta d\theta$$
$$x^{3}=8\sin^{3}\theta$$
Daniel.
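(For completeness, a finish not posted in the excerpt: rather than the trigonometric route (which, per the table above, would use $x = 2\sec\theta$), the substitution $u = x^2-4$, $du = 2x\,dx$ settles this one directly:

$$\int \frac{x^3}{\sqrt{x^2-4}}\,dx = \frac{1}{2}\int \frac{u+4}{\sqrt{u}}\,du = \frac{1}{3}(x^2-4)^{3/2} + 4\sqrt{x^2-4} + C.)$$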
|
2017-07-26 12:45:17
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8940772414207458, "perplexity": 2939.878520736187}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549426161.99/warc/CC-MAIN-20170726122153-20170726142153-00300.warc.gz"}
|
https://www.gradesaver.com/textbooks/math/calculus/calculus-early-transcendentals-2nd-edition/chapter-5-integration-review-exercises-page-395/22
|
## Calculus: Early Transcendentals (2nd Edition)
$\frac{1}{3}\ln (y^3+27)+c$
$\int\frac{y^2}{y^3+27}dy$
Use u-substitution: let $u=y^3+27$, so $du=3y^2\,dy$, i.e. $dy=\frac{du}{3y^2}$.
$\int\frac{y^2}{u}\frac{du}{3y^2}=\frac{1}{3}\int\frac{1}{u}du=\frac{1}{3}\ln u+c$
Substitute back for $u$: $\frac{1}{3}\ln (y^3+27)+c$
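As a quick check (an addition, not part of the original solution), differentiating recovers the integrand:

$$\frac{d}{dy}\left[\frac{1}{3}\ln(y^3+27)+c\right] = \frac{1}{3}\cdot\frac{3y^2}{y^3+27} = \frac{y^2}{y^3+27}.$$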
|
2019-11-12 05:17:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9322839379310608, "perplexity": 12204.693314535662}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496664752.70/warc/CC-MAIN-20191112051214-20191112075214-00452.warc.gz"}
|
https://math.stackexchange.com/questions/68991/inverse-laplace-transform-s-domain
|
# Inverse Laplace Transform -s domain
How can I find the inverse Laplace transform of the following function?
$$G\left(s\right)=\frac{2(s+1)}{s(s^2+s+2)}$$
This is how far I have gotten; how do I proceed from here?
$$G\left( s \right)=\frac{1}{s}+\frac{1}{s^2+s+2}-\frac{s}{s^2+s+2}$$
To find the inverse Laplace transforms of the function $\ G\left(s\right)=\dfrac{2\left(s+1\right)}{s\left(s^2+s+2\right)}$
You have solved up to partial fraction form of $G\left(s\right)$ i.e
$$G\left(s\right)=\frac{1}{s}+\frac{1}{s^2+s+2}-\frac{s}{s^2+s+2}$$ Now taking the Laplace inverse \begin{align}\mathcal{L^{-1}}\left\{G\left(s\right)\right\}&=\mathcal{L^{-1}}\left\{\frac{1}{s}+\frac{1}{s^2+s+2}-\frac{s}{s^2+s+2}\right\}\\&=\mathcal{L^{-1}}\left\{\frac{1}{s}\right\}+\mathcal{L^{-1}}\left\{\frac{1}{s^2+s+2}\right\}-\mathcal{L^{-1}}\left\{\frac{s}{s^2+s+2}\right\}\\ \end{align} Now the first term $$\mathcal{L^{-1}}\left\{\frac{1}{s}\right\}=1$$
Second term is
\begin{align}\mathcal{L^{-1}}\left\{\frac{1}{s^2+s+2}\right\}&=\mathcal{L^{-1}}\left\{\frac{1}{s^2+2\times s\times\frac{1}{2}+\left(\frac{1}{2}\right)^2+2-\left(\frac{1}{2}\right)^2}\right\}\\ &=\mathcal{L^{-1}}\left\{\frac{1}{\left(s+\frac{1}{2}\right)^2+\left(\frac{\sqrt{7}}{2}\right)^2}\right\}\\ &=\mathcal{L^{-1}}\left\{\frac{2}{\sqrt{7}}\frac{\frac{\sqrt{7}}{2}}{\left(s+\frac{1}{2}\right)^2+\left(\frac{\sqrt{7}}{2}\right)^2}\right\}\\ &=\frac{2}{\sqrt{7}}\mathcal{L^{-1}}\left\{\frac{\frac{\sqrt{7}}{2}}{\left(s+\frac{1}{2}\right)^2+\left(\frac{\sqrt{7}}{2}\right)^2}\right\}\\ &=\frac{2}{\sqrt{7}}e^{\frac{-1}{2}t}\sin\left(\frac{\sqrt{7}}{2}t\right)\\ \left[\text{since}\quad \mathcal{L^{-1}}\left\{\frac{b}{\left(s-a\right)^2+b^2}\right\}=e^{at}\sin\left(bt\right)\right] \end{align}
Third term is
\begin{align}\mathcal{L^{-1}}\left\{\frac{s}{s^2+s+2}\right\}&=\mathcal{L^{-1}}\left\{\frac{s}{s^2+2\times s\times\frac{1}{2}+\left(\frac{1}{2}\right)^2+2-\left(\frac{1}{2}\right)^2}\right\}\\ &=\mathcal{L^{-1}}\left\{\frac{s+\frac{1}{2}-\frac{1}{2}}{\left(s+\frac{1}{2}\right)^2+\left(\frac{\sqrt{7}}{2}\right)^2}\right\}\\ &=\mathcal{L^{-1}}\left\{\frac{\left(s+\frac{1}{2}\right)}{\left(s+\frac{1}{2}\right)^2+\left(\frac{\sqrt{7}}{2}\right)^2}\right\}-\mathcal{L^{-1}}\left\{\frac{\frac{1}{2}}{\left(s+\frac{1}{2}\right)^2+\left(\frac{\sqrt{7}}{2}\right)^2}\right\}\\ &=e^{\frac{-1}{2}t}\cos\left(\frac{\sqrt{7}}{2}t\right)-\frac{1}{2}\mathcal{L^{-1}}\left\{\frac{1}{\left(s+\frac{1}{2}\right)^2+\left(\frac{\sqrt{7}}{2}\right)^2}\right\}\\ &=e^{\frac{-1}{2}t}\cos\left(\frac{\sqrt{7}}{2}t\right)-\frac{1}{2}\cdot\frac{2}{\sqrt{7}}e^{\frac{-1}{2}t}\sin\left(\frac{\sqrt{7}}{2}t\right) \quad \left(\text{by the second term}\right)\\ &=e^{\frac{-1}{2}t}\cos\left(\frac{\sqrt{7}}{2}t\right)-\frac{1}{\sqrt{7}}e^{\frac{-1}{2}t}\sin\left(\frac{\sqrt{7}}{2}t\right)\\ \qquad \qquad \qquad \left[\text{since}\quad \mathcal{L^{-1}}\left\{\frac{s-a}{\left(s-a\right)^2+b^2}\right\}=e^{at}\cos\left(bt\right)\right] \end{align} Collecting all three terms (remembering the minus sign in front of the third), \begin{align}\mathcal{L^{-1}}\left\{G\left(s\right)\right\}&=1+\frac{2}{\sqrt{7}}e^{\frac{-1}{2}t}\sin\left(\frac{\sqrt{7}}{2}t\right)-e^{\frac{-1}{2}t}\cos\left(\frac{\sqrt{7}}{2}t\right)+\frac{1}{\sqrt{7}}e^{\frac{-1}{2}t}\sin\left(\frac{\sqrt{7}}{2}t\right)\\ &=1+\frac{3}{\sqrt{7}}e^{\frac{-1}{2}t}\sin\left(\frac{\sqrt{7}}{2}t\right)-e^{\frac{-1}{2}t}\cos\left(\frac{\sqrt{7}}{2}t\right)\\ \end{align}
Use: $$\mathcal{LT}_s\left( \sin(\alpha x) \mathrm{e}^{-b x} \right) = \int_0^\infty \sin(\alpha x) \mathrm{e}^{-b x} \mathrm{e}^{-s x} \mathrm{d} x = \frac{\alpha}{(s+b)^2 + \alpha^2}$$ $$\mathcal{LT}_s\left( \cos(\alpha x) \mathrm{e}^{-b x} \right) = \int_0^\infty \cos(\alpha x) \mathrm{e}^{-b x} \mathrm{e}^{-s x} \mathrm{d} x = \frac{b+s}{(s+b)^2 + \alpha^2}$$ Completing the square: $s^2+s+2 = \left(s+\frac{1}{2}\right)^2 + \frac{7}{4}$. Therefore, decompose the image of Laplace transform accordingly: $$\frac{2 (s+1)}{s \left(s^2+s+2\right)}=\frac{1}{s}-\frac{s+\frac{1}{2}}{\left(s+\frac{1}{2}\right) ^2+\frac{7}{4}}+\frac{3}{\sqrt{7}}\frac{\sqrt{7}/2}{\left(\left(s+\frac{1}{2}\right)^2+\frac{7}{4}\right)}$$ Compare with the answer by WolframAlpha.
Your function $$G(s) = \frac{2(s+1)}{s(s^2 + s + 2)}$$ has the partial fraction decomposition $$G(s) = \frac{A}{s} + \frac{Bs + C}{s^2 + s + 2}$$ The way I will solve this is to use complex analysis. Your original function can be broken down into three distinct linear factors by solving for the zeros of $$s^2 + s + 2$$ using the quadratic formula $$\frac{-b\pm\sqrt{b^2 -4ac}}{2a}$$ with $$a = b = 1$$ and $$c = 2$$. The zeros are $$s = -\frac{1}{2}+j\frac{\sqrt7}{2}$$ and $$s = -\frac{1}{2}-j\frac{\sqrt7}{2}$$. Your original function can now be written as $$G(s) = \frac{2(s+1)}{s\left(s-\left[-\frac{1}{2}+j\frac{\sqrt7}{2}\right]\right)\left(s-\left[-\frac{1}{2}-j\frac{\sqrt7}{2}\right]\right)}$$ fixing the brackets, we have $$G(s) = \frac{2(s+1)}{s\left(s+\frac{1}{2}-j\frac{\sqrt7}{2}\right)\left(s+\frac{1}{2}+j\frac{\sqrt7}{2}\right)}$$ The new partial fraction decomposition now is $$G(s) = \frac{A}{s}+\frac{B}{s+\frac{1}{2}-j\frac{\sqrt7}{2}}+\frac{C}{s+\frac{1}{2}+j\frac{\sqrt7}{2}}$$ Using the cover-up method, if we set $$s = 0$$ we will be able to solve for $$A$$. Thus $$A = \frac{2(s+1)}{\left(s+\frac{1}{2}-j\frac{\sqrt7}{2}\right)\left(s+\frac{1}{2}+j\frac{\sqrt7}{2}\right)}\Bigg|_{s=0} = \frac{2}{2} = 1$$ Utilize the FOIL method to get this. To solve for $$B$$ and $$C$$, we set $$s = -\frac{1}{2}+j\frac{\sqrt7}{2}$$ and $$s = -\frac{1}{2}-j\frac{\sqrt7}{2}$$ respectively. Since $$B$$ and $$C$$ are complex conjugates, $$C$$'s imaginary part will be the negative of $$B$$'s. Solving for $$B$$, $$B = \frac{2(s+1)}{s\left(s+\frac{1}{2}+j\frac{\sqrt7}{2}\right)}\Bigg|_{s=-\frac{1}{2}+j\frac{\sqrt7}{2}} = \frac{1+j\sqrt7}{-\frac{7}{2}-j\frac{\sqrt7}{2}}$$ $$B = -\frac{1}{2} - j\frac{3\sqrt7}{14}$$
Because $$C$$ is the complex conjugate of $$B$$, $$C$$ is therefore $$C = -\frac{1}{2} + j\frac{3\sqrt7}{14}$$ Thus, your original function is decomposed into $$G(s) = \frac{1}{s} + \frac{-\frac{1}{2} - j\frac{3\sqrt7}{14}}{s+\frac{1}{2}-j\frac{\sqrt7}{2}} + \frac{-\frac{1}{2} + j\frac{3\sqrt7}{14}}{s+\frac{1}{2}+j\frac{\sqrt7}{2}}$$ The inverse Laplace transform of this is $$g(t) = 1 + \left(-\frac{1}{2} - j\frac{3\sqrt7}{14}\right)e^{\left(-\frac{1}{2}+j\frac{\sqrt7}{2}\right)t} + \left(-\frac{1}{2} + j\frac{3\sqrt7}{14}\right)e^{\left(-\frac{1}{2}-j\frac{\sqrt7}{2}\right)t}$$ Do note that this is not yet the final answer. What I will do is make use of sine and cosine's equivalent complex exponential forms, namely: $$\sin(\theta) = \frac{1}{j2}(e^{j\theta} - e^{-j\theta})$$ $$\cos(\theta) = \frac{1}{2}(e^{j\theta} + e^{-j\theta})$$ We will then factor out $$e^{-\frac{1}{2}t}$$ and group the $$e^{\pm j\frac{\sqrt7}{2}t}$$ terms. Upon doing so, we get
$$g(t) = 1 + e^{-\frac{1}{2}t}\left[\left(-\frac{1}{2} - j\frac{3\sqrt7}{14}\right)e^{j\frac{\sqrt7}{2}t} + \left(-\frac{1}{2} + j\frac{3\sqrt7}{14}\right)e^{-j\frac{\sqrt7}{2}t}\right]$$ We will then collect the real and imaginary parts of $$e^{j\frac{\sqrt7}{2}t}$$ and $$e^{-j\frac{\sqrt7}{2}t}$$ because we will be able to get the inverse Laplace transform from those. By doing so, we have $$g(t) = 1 + e^{-\frac{1}{2}t}\left[-\frac{1}{2}\left(e^{j\frac{\sqrt7}{2}t} + e^{-j\frac{\sqrt7}{2}t}\right) - j\frac{3\sqrt7}{14}\left(e^{j\frac{\sqrt7}{2}t} - e^{-j\frac{\sqrt7}{2}t}\right)\right]$$ $$g(t) = 1 - e^{-\frac{1}{2}t}\cos\left(\frac{\sqrt7}{2}t\right) - j\frac{3\sqrt7}{14}(j2)e^{-\frac{1}{2}t}\sin\left(\frac{\sqrt7}{2}t\right)$$ $$g(t) = 1 - e^{-\frac{1}{2}t}\cos\left(\frac{\sqrt7}{2}t\right) + \frac{3\sqrt7}{7}e^{-\frac{1}{2}t}\sin\left(\frac{\sqrt7}{2}t\right)$$
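A quick machine check of this result (a sketch using SymPy; declaring $$t$$ positive lets the Heaviside factor SymPy normally attaches collapse to 1):
```
import sympy as sp

s, t = sp.symbols('s t', positive=True)
G = 2 * (s + 1) / (s * (s**2 + s + 2))

g = sp.inverse_laplace_transform(G, s, t)
print(sp.simplify(g))
# Expect: 1 - exp(-t/2)*cos(sqrt(7)*t/2) + (3/sqrt(7))*exp(-t/2)*sin(sqrt(7)*t/2)
```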
|
2022-07-03 02:26:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 32, "wp-katex-eq": 0, "align": 4, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9420796036720276, "perplexity": 229.29421633051467}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104209449.64/warc/CC-MAIN-20220703013155-20220703043155-00640.warc.gz"}
|
https://www.physicsforums.com/threads/change-of-temperature-of-ideal-gas.312484/
|
# Change of temperature of ideal gas
1. May 7, 2009
### hubber26
1. The problem statement, all variables and given/known data
We have 1 m³ of Ar gas, M = 39.948 g/mol, at pressure p = 101300 Pa and temperature T = 298 K. We put the energy Q = 1000 J into the gas. What is the temperature after we put in the 10³ J? (c = 0.5203×10³ J/(kg·K))
2. Relevant equations
I'm not interested in the solution itself. I tried to solve it in two ways: using the formula for the internal energy of an ideal gas, $$\Delta U = \frac{3}{2}nR\Delta T$$, and using the formula $$Q = mc\Delta T$$.
However, the solutions are not the same. Why?
3. The attempt at a solution
pV = nRT => n = pV / (RT) = 40.89 mol
m = Mn = 1633 g = 1.633 kg
$$\Delta U = \frac{3}{2}nR\Delta T$$
thus $$\Delta T = \frac{Q}{\frac{3}{2}nR} = \frac{1000}{1.5 \times 40.89 \times 8.314} = 1.961\ \text{K}$$
however
$$Q = mc\Delta T$$
thus $$\Delta T = \frac{Q}{mc} = \frac{1000}{1.633 \times 0.5203 \times 10^{3}} = 1.177\ \text{K}$$
2. May 7, 2009
### Count Iblis
You took c to be c_p while you need c_v here.
3. May 7, 2009
### hubber26
right. Thanks very much!
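A quick numeric sketch of both routes (plain Python, using the constants given above); it shows the two formulas agree once c_v is used instead of c_p:
```
R = 8.314            # J/(mol K)
M = 39.948e-3        # kg/mol
p, V, T = 101300.0, 1.0, 298.0
Q = 1000.0           # J

n = p * V / (R * T)              # ~ 40.89 mol
m = n * M                        # ~ 1.633 kg

dT_internal = Q / (1.5 * n * R)  # from dU = (3/2) n R dT, ~ 1.961 K

c_v = 1.5 * R / M                # ~ 312 J/(kg K), monatomic ideal gas
c_p = 2.5 * R / M                # ~ 520 J/(kg K), the tabulated value used above

print(dT_internal)               # ~ 1.961 K
print(Q / (m * c_v))             # ~ 1.961 K, same as the internal-energy route
print(Q / (m * c_p))             # ~ 1.177 K, the discrepant value
```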
|
2017-12-15 08:59:02
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.62217777967453, "perplexity": 4919.737797547572}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948567785.59/warc/CC-MAIN-20171215075536-20171215095536-00338.warc.gz"}
|
https://brilliant.org/discussions/thread/wanted-help-in-these/
|
# Wanted help in these
1) For the system shown in the figure, the acceleration of the mass $$m_{4}$$ immediately after the lower thread $$x$$ is cut will be (assume the threads and springs are light and there is no friction). In the diagram, mass $$m_{4}$$ hangs from the string; above it is $$m_{3}$$, and to the left of $$m_{3}$$ is $$m_{1}$$.
2) If $$f(x)$$ is an increasing function, continuous for $$x>0$$ with $$f(1)=1$$, and $$\large{\displaystyle \int_{0}^{x} 2xf^{2}(t)\,dt=\left(\displaystyle \int_{0}^{x} 2f(x-t)\,dt\right)^{2}}$$, find $$f(x)$$.
3) Let $$f(x)=\sin^{4} \pi x$$ and $$g(x)=\ln x$$. Find the whole area bounded by $$y=f(x)$$, $$y=g(x)$$ and $$x=0$$.
Note by Tanishq Varshney
1 year, 5 months ago
Nice problems.
1. Just write free body diagrams for each block. What you need to find is the force in the right hand side spring. Once the string is cut, the force in the right hand side spring is the only thing other than gravity acting on $$m_4$$. After solving using F.B.D, you will get the force.
2. Firstly, substitute $$x-t$$ in place of $$t$$ in the RHS integral. Then take derivative w.r.t $$x$$ on both sides. Use the given equation and substitute RHS in place of LHS wherever necessary after taking derivative. If done right, you will now be having an equation in which you have $$\int _{ 0 }^{ x }{ f\left( x \right) dx }$$, $$f(x)$$ and $$x$$ only. You can now write this as a differential equation where $$y=\int _{ 0 }^{ x }{ f\left( x \right) dx }$$. Since the differential equation is of 2nd degree, you will have to use quadratic formula. Finally, after solving, you will get two solutions, but only one of them satisfies all conditions given in problem.
3. $$f(x)$$ and $$g(x)$$ intersect only once and that is at $$x=1$$. Hence the required answer is $$\int _{ 0 }^{ 1 }{\left( \sin ^{ 4 }{ \pi x } -\ln { x }\right) }dx = \frac{11}{8}$$ (see the quick check below).
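A quick symbolic confirmation of that area (a sketch; SymPy assumed):
```
import sympy as sp

x = sp.symbols('x', positive=True)
area = sp.integrate(sp.sin(sp.pi * x)**4 - sp.ln(x), (x, 0, 1))
print(area)  # 11/8  (3/8 from sin^4(pi x), plus 1 from -ln(x))
```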
@Tanishq Varshney · 1 year, 5 months ago
But in 3) they have five solutions. Why did we take only $$0$$ to $$1$$? · 1 year, 5 months ago
That is correct, but there the areas are not also bounded by $$x=0$$. · 1 year, 5 months ago
OK, now I understand. Can you please elaborate on 2)? · 1 year, 5 months ago
The answers I have are 1) $$[\frac{m_{3}+m_{4}-m_{1}-m_{2}}{m_{4}}]g$$ $$\quad$$ 2) $$f(x)=x^{\sqrt{2}+1}$$ $$\quad$$ 3) $$\frac{11}{8}$$ · 1 year, 5 months ago
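A symbolic check of the claimed $$f(x)=x^{\sqrt{2}+1}$$ in 2) (a sketch; SymPy assumed):
```
import sympy as sp

x, t = sp.symbols('x t', positive=True)
f = lambda u: u ** (1 + sp.sqrt(2))    # increasing on x > 0, f(1) = 1

lhs = sp.integrate(2 * x * f(t)**2, (t, 0, x))
rhs = sp.integrate(2 * f(x - t), (t, 0, x)) ** 2
print((lhs - rhs).equals(0))           # True: the functional equation holds
```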
Let the tension in the above string be $$T$$. And let the extension of left and right springs be $$x$$ and $$x_{2}$$.
From the free body diagrams we get
$$T=m_{1}g+kx$$.....................(1)
$$kx=m_{2}g$$...............................(2)
$$m_{3}g+kx_{2}=T$$............................(3)
From 1 and 2 we get $$T=(m_{1}+m_{2})g$$
From (3)
$$kx_{2}=(m_{1}+m_{2}-m_{3})g$$
Now when the lower string is cut then only gravitational force and force spring due to string will act on it.
Let the acceleration of $$m_{4}$$ be $$a$$, taken in the downward direction:
So $$m_{4}g-kx_{2}=m_{4}a$$
So $$a=\frac { g(m_{ 4 }+m_{ 3 }-m_{ 1 }-m_{ 2 }) }{ m_{ 4 } }$$ · 1 year, 5 months ago
Thanx for the solution $$\ddot \smile$$ · 1 year, 5 months ago
You are welcome!! :) · 1 year, 5 months ago
You can solve it orally, by using Newton's 2nd law directly: since the tensions are internal forces ...
$${ F }_{ ext }=ma$$
The system moves with the same acceleration (ideal, inextensible strings), so
$${ F }_{ ext }=ma\\ { m }_{ 3 }g+{ m }_{ 4 }g-{ m }_{ 1 }g-{ m }_{ 2 }g={ m }_{ 4 }a$$
hence the answer.... · 1 year, 5 months ago
|
2016-10-25 03:20:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8701841831207275, "perplexity": 526.3992438423141}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719877.27/warc/CC-MAIN-20161020183839-00372-ip-10-171-6-4.ec2.internal.warc.gz"}
|
https://physics.stackexchange.com/questions/471712/charged-object-shrinking-in-scale-to-volume-0-causes-electric-field-becoming-c/471826
|
# Charged object shrinking in scale to volume $0$ causes electric field becoming constant
How can one state, mathematically rigorously, the fact that as the volume of a charged object of any shape approaches $$0$$, the electric field on the surface of a spherical region centered at the object becomes constant in magnitude and normal to the surface in direction? I'm not quite satisfied with the explanation that it becomes a point charge.
Mathematically speaking, I wish to know why it is that $$\lim_{V\to0^+}\mathbf{E}(x,y,z;V)=\frac{Q}{\epsilon_0\unicode{x222F}_{\partial\Omega}1\operatorname{d}\mathbf{A}}\hat{n}=\frac{Q}{4\epsilon_0\pi r^2}\hat{n}\quad\forall x,y,z$$ where $$\mathbf{E}$$ depends on $$V$$, the volume of the charged object, shrinking in scale.
• You need not only the volume to shrink to 0, but the whole object to become small (contained in a ball of radius $\epsilon$); otherwise you may get a thin wire or a plane – patta Apr 10 at 11:25
If I move a finite charge by a small amount, I expect to affect the far-away electric field by a small amount; approaching zero movement approaches zero change in the far field.
Now imagine your object, shrunk to small size. Make it rotate around its centre; the charges move a little, hence the far field changes little. It can rotate in all directions, which implies spherical symmetry, that is, the field of a point charge (or a spherical distribution of charges).
I would frame the question like this. Let $$\rho_1\left(\mathbf{r}\right)$$ be some charge density that vanishes outside the sphere of radius $$R$$. I will now use a unitless constant $$\alpha \in \mathbb{R}$$, $$\alpha>0$$, to define:
$$\rho_\alpha\left(\mathbf{r}\right)=\rho_1\left(\mathbf{r}\cdot \alpha\right)$$ clearly, letting $$\alpha \gg 1$$ squeezes the charge density.
The scalar potential due to this charge density is (using electrostatics):
$$\phi_\alpha\left(\mathbf{r}\right) = \frac{1}{4\pi\epsilon_0}\int d^3 r' \frac{\rho_\alpha\left(\mathbf{r}'\right)}{\left|\mathbf{r}-\mathbf{r}'\right|}= \frac{1}{4\pi\epsilon_0}\int d^3 r' \frac{\rho_1\left(\alpha\cdot\mathbf{r}'\right)}{\left|\mathbf{r}-\mathbf{r}'\right|}$$
We can change the integration variable to $$\boldsymbol{\zeta}=\alpha\mathbf{r}'$$:
$$\phi_\alpha\left(\mathbf{r}\right) = \frac{1}{4\pi\epsilon_0}\int d^3 \zeta \frac{\rho_1\left(\boldsymbol{\zeta}\right)}{\left|\alpha\cdot\mathbf{r}-\boldsymbol{\zeta}\right|}$$
Now, assuming $$\alpha r\gg R$$:
$$\phi_\alpha\left(\mathbf{r}\right) \approx \frac{1}{4\pi\epsilon_0}\left[\frac{1}{\alpha r}\int d^3 \zeta\,\, \rho_1\left(\boldsymbol{\zeta}\right) + \frac{1}{\alpha^2 r^2}\int d^3 \zeta\,\, \left(\mathbf{\hat{r}}.\boldsymbol{\zeta}\right)\rho_1\left(\boldsymbol{\zeta}\right)+\dots\right]$$
So in the limit of $$\alpha\to \infty$$ (small volume for $$\rho$$) or $$r\to\infty$$ (observer far away), one gets:
$$\phi_\alpha\left(\mathbf{r}\right) \approx \frac{1}{4\pi\epsilon_0}\left[\frac{1}{\alpha r}\int d^3 \zeta\,\, \rho_1\left(\boldsymbol{\zeta}\right)\right]$$
which is independent of orientation of $$\mathbf{r}$$, i.e. spherically symmetric potential -> uniform radial field.
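A small numerical illustration of this limit (a sketch; the random charge cloud and the NumPy setup are my own assumptions, not part of the derivation above):
```
import numpy as np

rng = np.random.default_rng(0)
charges = rng.uniform(-0.5, 0.5, size=(20, 3))   # positions of point charges
q = np.full(20, 1.0 / 20)                        # total charge Q = 1

def potential(points, scale):
    # Potential (in units of 1/(4 pi eps0)) after shrinking the cloud by `scale`.
    d = np.linalg.norm(points[:, None, :] - charges[None, :, :] / scale, axis=-1)
    return (q / d).sum(axis=-1)

# Sample points on a circle of the unit sphere.
phi = np.linspace(0, 2 * np.pi, 50)
pts = np.stack([np.cos(phi), np.sin(phi), np.zeros_like(phi)], axis=-1)

for alpha in (1.0, 10.0, 100.0):
    v = potential(pts, alpha)
    print(alpha, v.max() - v.min())   # the spread tends to 0 as alpha grows
```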
• I think I need a little bit of explanation from $\phi_\alpha\left(\mathbf{r}\right) = \frac{1}{4\pi\epsilon_0}\int d^3 \zeta \frac{\rho_1\left(\boldsymbol{\zeta}\right)}{\left|\alpha\cdot\mathbf{r}-\boldsymbol{\zeta}\right|}$ to $\phi_\alpha\left(\mathbf{r}\right) \approx \frac{1}{4\pi\epsilon_0}\left[\frac{1}{\alpha r}\int d^3 \zeta\,\, \rho_1\left(\boldsymbol{\zeta}\right) + \frac{1}{\alpha^2 r^2}\int d^3 \zeta\,\, \left(\mathbf{\hat{r}}.\boldsymbol{\zeta}\right)\rho_1\left(\boldsymbol{\zeta}\right)+\dots\right]$. Also do you mean $r>\frac{R}{\alpha}$? – Sayako Hoshimiya Apr 11 at 12:24
• It's Taylor expansion. In general $f=\frac{1}{\left|\mathbf{a}-\boldsymbol{\zeta}\right|}=\frac{1}{a}\cdot\left(1+\left(\frac{\zeta}{a}\right)^2-2\left(\frac{\zeta}{a}\right)\left(\mathbf{\hat{a}.\boldsymbol{\hat{\zeta}}}\right)\right)^{-1/2}$. Now expand for small $\left(\frac{\zeta}{a}\right)$, keep only first two terms, then $a\to r\alpha$ – Cryo Apr 12 at 4:22
• I corrected my error with $\alpha r\gg R$. Thanks – Cryo Apr 12 at 4:25
|
2019-11-13 17:32:53
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 19, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8513804078102112, "perplexity": 266.9230341631532}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496667319.87/warc/CC-MAIN-20191113164312-20191113192312-00068.warc.gz"}
|
https://mindthetest.wordpress.com/2017/08/07/removing-latex-commands-using-python-re-module/
|
# Removing latex commands using Python “re” module
Recently I had to sanitize lines in a .tex file where a `\textcolor` command had been used.
The command was being used the following way: `{\textcolor{some_color}{text to color}}`.
The main problem was that the command could appear any number of times in a line, so I couldn't just apply a replacement a fixed number of times.
Also, given any color could have been used, a simple “blind replace” was clearly not a good weapon in this case.
I therefore resorted to applying a regex recursively until the line was cleaned of any `\textcolor` command.
In a nutshell:
```
import re

def discolor(line):
    # The raw string matters: without it, '\t' in '\textcolor' is parsed
    # as a literal tab character and the pattern never matches.
    regex = re.compile(r'(.*?)\{\\textcolor\{.*?\}(\{.*?\})\}(.*)')
    while True:
        try:
            # Keep what comes before, the braced text itself, and what comes after.
            line = ''.join(re.search(regex, line).groups())
        except AttributeError:
            # re.search returned None: no \textcolor commands left.
            return line
```
The key part here is that we match not only the text inside the `\textcolor` command, but also what comes before and after it (the leading `(.*?)` and trailing `(.*)` groups). We join the groups back together and repeat until no matches are left: at that point `re.search` returns `None`, so accessing `.groups()` raises an `AttributeError`, which we catch and use as a sentinel to know when to return.
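For example (a hypothetical input line, run against the fixed `discolor` above):
```
line = r'A {\textcolor{red}{bold}} claim and a {\textcolor{blue}{timid}} one.'
print(discolor(line))
# A {bold} claim and a {timid} one.
```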
|
2018-02-22 03:17:34
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8662936091423035, "perplexity": 1546.0377061589234}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891813883.34/warc/CC-MAIN-20180222022059-20180222042059-00663.warc.gz"}
|
http://www.ck12.org/book/CK-12-Math-Analysis/section/5.4/
|
<img src="https://d5nxst8fruw4z.cloudfront.net/atrk.gif?account=iA1Pi1a8Dy00ym" style="display:none" height="1" width="1" alt="" />
# 5.4: Cross Products
## Learning objectives
• Calculate the cross product of a pair of vectors.
• Apply the Right Hand Rule to correctly identify the direction of the vector product.
• Use the cross product to identify the direction perpendicular to a plane.
## Introduction
In the last section we discussed the dot product of two vectors. In this section we will discuss the other form of vector multiplication. The dot product of two vectors produced a scalar value; the cross product of the same two vectors will produce a vector quantity having a direction perpendicular to the original two vectors.
## Cross Product
The cross product of two vector quantities is another vector whose magnitude varies as the angle between the two original vectors changes. The cross product is therefore sometimes referred to as the vector product of two vectors. The magnitude of the cross product represents the area of the parallelogram whose sides are defined by the two vectors, as shown in the figure below. Therefore, the maximum value for the cross product occurs when the two vectors are perpendicular to one another, but when the two vectors are parallel to one another the magnitude of the cross product is equal to zero.
Like the dot product, the cross product must satisfy several important properties of multiplication. The cross product is distributive,
\begin{align*}\overrightarrow{A} \times (\overrightarrow{B} + \overrightarrow{C}) = \overrightarrow{A} \times \overrightarrow{B} + \overrightarrow{A} \times \overrightarrow{C}\end{align*}
But, unlike the dot product, the cross product is anti-commutative
\begin{align*}\overrightarrow{A} \times \overrightarrow{B} = -\overrightarrow{B} \times \overrightarrow{A}\end{align*}
## Computing the Cross Product
The algebraic form of the cross product equation is more complicated than that for the dot product. For two 3D vectors \begin{align*}\overrightarrow{A}\end{align*} and \begin{align*}\overrightarrow{B}\end{align*},
\begin{align*}\overrightarrow{A} \times \overrightarrow{B} = \left \langle (A_2 B_3 - A_3 B_2),(A_3 B_1 - A_1 B_3), (A_1 B_2 - A_2 B_1)\right \rangle\end{align*}
Another way to describe the process is to say that the cross product is the multiplication of one vector by the component of the other vector which is perpendicular to the first vector. In the diagram below are two vectors, A and B. A perpendicular line has been drawn radially outward from B towards A to create a right triangle with A as the hypotenuse.
The component of \begin{align*}\overrightarrow{A}\end{align*} which is perpendicular to \begin{align*}\overrightarrow{B}\end{align*} is given by A sin θ, so the magnitude of the cross product can be written as \begin{align*}| \overrightarrow{A} \times \overrightarrow{B} | = | \overrightarrow{A} | (| \overrightarrow{B} | \ \mbox{sin}\ \theta) = | \overrightarrow{A} | | \overrightarrow{B} | \ \mbox{sin}\ \theta\end{align*}
Likewise, the component of \begin{align*}\overrightarrow{B}\end{align*} which is perpendicular to \begin{align*}\overrightarrow{A}\end{align*} is given by B sin θ, so the cross product \begin{align*}| \overrightarrow{B} \times \overrightarrow{A} | = | \overrightarrow{B} | (| \overrightarrow{A} | \ \mbox{sin}\ \theta ) = | \overrightarrow{A} | | \overrightarrow{B} | \ \mbox{sin}\ \theta\end{align*}
The value of the cross product is maximized when the two vectors are perpendicular and zero when the two vectors are parallel to one another. When a vector is crossed with itself, the result is the zero vector since a vector has no component perpendicular to itself.
\begin{align*}| \overrightarrow{A} \times \overrightarrow{A} | = | \overrightarrow{A} | (| \overrightarrow{A} | \ \mbox{sin}\ 0 ) = | \overrightarrow{A} | | \overrightarrow{A} | \ \mbox{sin}\ 0 = 0\end{align*}
Also, the cross product of any vector with the zero vector is equal to zero since the magnitude of the zero vector is itself equal to 0.
We mentioned earlier, that the direction of the cross product is perpendicular to the plane defined by the two crossed vectors. For example, the cross product of two vectors in the x-y plane will be parallel to the z-axis. This still leaves two possible directions for the cross product, though: either \begin{align*}+ \hat{z}\end{align*} or \begin{align*}-\hat{z}\end{align*}.
We use a right-hand-rule to indicate the direction of the cross product. Position the thumb and index finger of your right hand with the first vector along your thumb and the second vector along your index finger. Your middle finger, when extended perpendicular to your palm, will indicate the direction of the cross product of the two vectors.
As you can see in the diagram above, \begin{align*}\overrightarrow{A} \times \overrightarrow{B}\end{align*} is along \begin{align*}+\hat{z}\end{align*} (coming up out of the page) while \begin{align*}\overrightarrow{B} \times \overrightarrow{A}\end{align*} is along \begin{align*}-\hat{z}\end{align*} (going down into the page) and \begin{align*}\overrightarrow{A} \times \overrightarrow{B} = - \overrightarrow{B} \times \overrightarrow{A}\end{align*}
Example: Calculate the cross product of the two vectors shown below.
Solution: Use the components of the two vectors to determine the cross product.
\begin{align*}\overrightarrow{A} \times \overrightarrow{B} = \left \langle (A_y B_z - A_z B_y), (A_zB_x - A_x B_z), (A_x B_y - A_y B_x)\right \rangle\end{align*}
Since these two vectors are both in the x-y plane, their own z-components are both equal to 0 and the vector product will be parallel to the z axis.
\begin{align*}\overrightarrow{A} \times \overrightarrow{B} = \left \langle [(3 \cdot 0) - (0 \cdot 2)], [(0 \cdot -4) - (2.5 \cdot 0)], [(2.5 \cdot 2) - (3 \cdot -4)]\right \rangle\end{align*}
\begin{align*}\overrightarrow{A} \times \overrightarrow{B} = \left \langle [(0) - (0)], [(0) - (0)], [(5) - (-12)]\right \rangle = \left \langle 0,0,(5 + 12)\right \rangle = \left \langle 0,0,17\right \rangle\end{align*}
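As a quick numerical cross-check of the component computation (a sketch; NumPy assumed):
```
import numpy as np

A = np.array([2.5, 3.0, 0.0])
B = np.array([-4.0, 2.0, 0.0])

print(np.cross(A, B))   # [ 0.  0. 17.]

# The angle between A and B, via the dot product (used again below): ~103 degrees.
cos_theta = A @ B / (np.linalg.norm(A) * np.linalg.norm(B))
print(np.degrees(np.arccos(cos_theta)))
```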
We can check our answer using the sine version of the cross product, but first we need to know the angle between the two vectors. We can use the dot product to find θ, following the procedure in the first Example in the previous section. First use the components to find the dot product.
\begin{align*}\overrightarrow{A} \cdot \overrightarrow{B} = A_x B_x + A_y B_y + A_z B_z = (2.5 \cdot -4) + (3 \cdot 2) + (0 \cdot 0) = -10 + 6 + 0 = -4\end{align*}
Then find the magnitudes of the two vectors:
\begin{align*}| \overrightarrow{A} | = \sqrt{A^2_x + A^2_y + A^2_z} = \sqrt{2.5^2 + 3^2 + 0^2} = \sqrt{6.25 + 9 + 0} = \sqrt{15.25}\end{align*}
\begin{align*}| \overrightarrow{B} | = \sqrt{B^2_x + B^2_y + B^2_z} = \sqrt{(-4)^2 + 2^2 + 0^2} = \sqrt{16 + 4 + 0} = \sqrt{20}\end{align*}
Then use these magnitudes with the cosine version of the dot product to find θ.
\begin{align*}\overrightarrow{A} \cdot \overrightarrow{B} = | A | | B |\ \mbox{cos}\ \theta\end{align*}
\begin{align*}-4 = \sqrt{15.25} \sqrt{20}\ \mbox{cos}\ \theta\end{align*}
\begin{align*}\mbox{cos}\ \theta = \frac{-4} {\sqrt{305}} \approx \frac{-4} {17.5} = -0.229\end{align*}
\begin{align*}\theta = 103^\circ\end{align*}
Now use the sine of this angle and the two magnitudes to determine the cross product:
\begin{align*}| \overrightarrow{A} \times \overrightarrow{B} | = | \overrightarrow{A} | | \overrightarrow{B} | \ \mbox{sin}\ \theta\end{align*}
\begin{align*}| \overrightarrow{A} \times \overrightarrow{B} | = \sqrt{15.25} \sqrt{20}\ \mbox{sin}\ 103^\circ = \sqrt{305}\ \mbox{sin}\ 103^\circ = 17\end{align*}
This is the same answer that we obtained from the component notation, which is good. We use the RHR to determine the direction of the vector product. If you place your thumb along vector A and your forefinger along vector B, your middle finger will point along \begin{align*}+\hat{z}\end{align*} and \begin{align*}| \overrightarrow{A} \times \overrightarrow{B} | = \left \langle 0, 0, 17\right \rangle\end{align*}
## The Normal Vector
We can use the cross product and the definition of the unit vector to determine the direction which is perpendicular to a plane. In the previous example, the cross product of the two vectors had a magnitude given by
\begin{align*}| \overrightarrow{A} \times \overrightarrow{B} | = \sqrt{15.25}\ \sqrt{20}\ \mbox{sin}\ 103^\circ = \sqrt{305}\ \mbox{sin}\ 103^\circ = 17\end{align*}
and we used the right hand rule to show that it pointed along \begin{align*}+\hat{z}\end{align*} such that
\begin{align*}\overrightarrow{A} \times \overrightarrow{B} = \left \langle0, 0, 17\right \rangle\end{align*}
In general, we can define a normal vector, \begin{align*}\hat{n}\end{align*}, which has a unity magnitude (i.e. magnitude equal to one) and which is perpendicular to a plane occupied by a pair of vectors, U and V.
\begin{align*}\hat{n} = \frac{\overrightarrow{U} \times \overrightarrow{V}} {|\overrightarrow{U} \times \overrightarrow{V}|}\end{align*}
For the vectors A and B in the previous problem
\begin{align*}\hat{n} = \frac{\overrightarrow{A} \times \overrightarrow{B}} {|\overrightarrow{A} \times \overrightarrow{B}|} = \frac{\left \langle 0, 0, 17 \right \rangle} {17} = \left \langle \frac{0} {17}, \frac{0} {17}, \frac{17} {17} \right \rangle = \left \langle 0, 0, 1\right \rangle\end{align*}
which is in the \begin{align*}+ \hat{z}\end{align*} direction, as shown above using the right hand rule.
Example: The diagram shows two vectors A and B which define a plane passing through the origin. Use these two vectors to determine the normal vector to this plane. \begin{align*}\overrightarrow{A} = \left \langle 3, 0, 4\right \rangle\end{align*} and \begin{align*}\overrightarrow{B} = \left \langle 5, 10, 0\right \rangle\end{align*}
Solution: The normal vector is defined by
\begin{align*}\hat{n} = \frac{\overrightarrow{U} \times \overrightarrow{V}} {|\overrightarrow{U} \times \overrightarrow{V}|}\end{align*}
In this case, we obtain
\begin{align*}\hat{n} = \frac{\overrightarrow{A} \times \overrightarrow{B}} {|\overrightarrow{A} \times \overrightarrow{B}|}\end{align*}
Use the component version of the cross-product equation to find the components of \begin{align*}\overrightarrow{A} \times \overrightarrow{B}\end{align*}
\begin{align*}\overrightarrow{A} \times \overrightarrow{B} = \left \langle (A_y B_z - A_z B_y), (A_z B_x - A_x B_z), (A_x B_y - A_y B_x)\right \rangle\end{align*}
\begin{align*}\overrightarrow{A} \times \overrightarrow{B} = \left \langle [(0 \cdot 0) - (4 \cdot 10)], [(4 \cdot 5) - (3 \cdot 0)], [(3 \cdot 10) - (0 \cdot 5)]\right \rangle\end{align*}
\begin{align*}\overrightarrow{A} \times \overrightarrow{B} = \left \langle (0 - 40), (20 - 0), (30 - 0)\right \rangle = \left \langle -40, 20, 30\right \rangle\end{align*}
Next, calculate the magnitude of the cross product, \begin{align*}| \overrightarrow{A} \times \overrightarrow{B}|\end{align*}
\begin{align*}| \overrightarrow{A} \times \overrightarrow{B}| = \sqrt{(-40)^2 + 20^2 + 30^2} = \sqrt{1600 + 400 + 900} = \sqrt {2900} = 53.8516\end{align*}
\begin{align*}\hat{n} = \frac{\overrightarrow{A} \times \overrightarrow{B}} {|\overrightarrow{A} \times \overrightarrow{B}|} = \frac{\left \langle -40, 20, 30\right \rangle} {53.9} = \left \langle \frac{-40} {53.9}, \frac{20} {53.9}, \frac{30} {53.9}\right \rangle = \left \langle -0.743, 0.371, 0.557\right \rangle\end{align*}
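A short numerical sketch of this normal-vector computation (NumPy assumed):
```
import numpy as np

A = np.array([3.0, 0.0, 4.0])
B = np.array([5.0, 10.0, 0.0])

n = np.cross(A, B)                 # [-40.  20.  30.]
n_hat = n / np.linalg.norm(n)      # ~ [-0.743  0.371  0.557]
print(n, n_hat)
```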
## Lesson Summary
One of the two ways to multiply vector quantities is the vector product. The vector product, also known as the cross product, multiplies one vector by the component of the second vector which is perpendicular to the first. The result of a vector product of two vectors is always a vector quantity which is perpendicular to the plane defined by the first two vectors. There are two ways to calculate the cross product: \begin{align*}\overrightarrow{A} \times \overrightarrow{B} = \left \langle(A_y B_z - A_z B_y), (A_z B_x - A_x B_z), (A_x B_y - A_y B_x)\right \rangle\end{align*} and the magnitude of the cross product is given by \begin{align*}| \overrightarrow{A} \times \overrightarrow{B}| = | A | | B | \ \mbox{sin}\ \theta\end{align*}. These two versions of the cross product can be used to determine the angle between two vectors. The cross product can also be used to identify the direction perpendicular to a plane.
## Practice Problems
1. Determine the magnitude and direction of the cross product \begin{align*}\overrightarrow{F} \times \overrightarrow{r}\end{align*} for the two vectors \begin{align*}\overrightarrow{F} = \left \langle 2, 3, 4\right \rangle\end{align*} and \begin{align*}\overrightarrow{r} = \left \langle 7, 6, 5\right \rangle\end{align*}. Then use the cross product to determine the angle between the two vectors.
2. Determine the magnitude of the cross product of the two vectors shown below.
3. A plane passing through the origin is defined by the two vectors, \begin{align*}\overrightarrow{W} = \left \langle 4, 5, 2\right \rangle\end{align*} and \begin{align*}\overrightarrow{L} = \left \langle 8, 1, 9\right \rangle\end{align*}. Determine the equation of a unit vector representing a direction perpendicular to this plane.
4. Determine the area of a parallelogram whose sides are defined by the vectors \begin{align*}\overrightarrow{w} = \left \langle 85, 89, 91\right \rangle\end{align*} and \begin{align*}\overrightarrow{h} = \left \langle 67, 70, 88\right \rangle\end{align*}, lengths measured in centimeters.
5. Determine the magnitude of the cross product of the two vectors \begin{align*}\overrightarrow{f} = \left \langle 3, 13, 11\right \rangle\end{align*} and \begin{align*}\overrightarrow{g} = \left \langle 9, 6, 15\right \rangle\end{align*}.
6. Determine the equation for the unit vector perpendicular to the plane defined by the two vectors \begin{align*}\overrightarrow{a} = \left \langle 2, 7, 4\right \rangle\end{align*} and \begin{align*}\overrightarrow{b} = \left \langle 0, 5, 1\right \rangle\end{align*}.
7. Determine the area of the parallelogram whose sides are defined by \begin{align*}\overrightarrow{R} = \left \langle 27, 39, 52\right \rangle\end{align*} and \begin{align*}\overrightarrow{T} = \left \langle 44, 26, 17\right \rangle\end{align*}, lengths measured in millimeters.
8. Determine the magnitude and direction of the cross-product of these two vectors.
|
2016-05-30 06:24:07
|
{"extraction_info": {"found_math": true, "script_math_tex": 68, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9996322393417358, "perplexity": 1001.9215544192874}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049288709.66/warc/CC-MAIN-20160524002128-00086-ip-10-185-217-139.ec2.internal.warc.gz"}
|
https://dsnielsen.com/2017/06/07/borel-determinacy/
|
# Borel Determinacy
The proof of Borel determinacy doesn't seem to have the best reputation, as it's rather long, quite technical, and it's really easy to lose track of what's going on. I've noticed that the same proof can be presented in a more structural setting, making the core ideas of the proof be slightly clearer. I'll try here to present what's going on in the proof, using the structural framework of games I set up in my previous post. The full proof can be found in my determinacy project.
In my previous post I only considered games in which integers were played, but it’s a curious fact that the proof of Borel determinacy needs to consider all possible games to work. Let’s therefore write $G_X(T,A)$ for the game in which the two players play elements $x\in X$ for any set $X$, $T\subseteq X^{<\omega}$ is the pruned tree of legal moves and $A\subseteq X^\omega$ is the payoff set. Our previous games $G(T,A)$ are therefore simply $G_\omega(T,A)$. All right, so far so good. Let’s recall what Borel determinacy actually says.
Theorem (Martin). Every Borel game $G_X(T,A)$ is determined.
As most proofs of determinacy, the strategy is to come up with auxiliary simple games which are determined, and somehow transfer this fact to the game in question – this is where the coverings come into play. Martin defines that a covering $f:G_Y(U)\to G_X(T)$ unravels a set $A\subset[T]$ if $\tilde\pi_f^{-1}[A]$ is clopen in $[U]$. A game $G_X(T,A)$ is then said to be unraveled if there exists a covering $f:G_Y(U)\to G_X(T)$ unraveling $A$. By playing around with the definition of covering and being unraveled, we get the following fact, which is the reason why we care about unraveled games.
Proposition. Every unraveled game is determined.
The problem is thus reduced to showing that every Borel game is unraveled. This is done inductively, starting with closed ($\bf\Pi^0_1$) games and then inductively showing that every $\bf\Pi^0_\xi$ game is unraveled, for every $\xi<\omega_1$. The ‘closed case’ is a direct argument, producing an explicit covering of an arbitrary closed game. Here’s a sketch of how it’s constructed.
Given any game $G_X(T)$ we will define an auxiliary game $G_Y(U)$ with a covering $f:G_Y(U)\to G_X(T)$ unraveling the closed set $A\subseteq[T]$. Since the game is already closed we need to enforce an “open” condition, which is to say that we want to modify $G_X(T)$ so that if player I wins, he will already have won at a finite stage. The way this is done is by making the two players at round $k<\omega$ play a set of strategies, which they’re then required to follow for the rest of the game.
This means that the game is really over at round $k$, in that if player I (resp player II) wins, then this was already known in the $k$‘th round. We can then produce the first part of the covering $\pi_f:U\to T$ as simply forgetting this extra strategy-information. Constructing the second part is done by considering a series of cases, which I’ll omit here. This finishes the sketch of the following.
Proposition. Every closed game is unraveled.
That finishes the “induction start”. For the limit levels of the induction we need to improve the above-mentioned result. Note that the auxiliary game didn’t depend on which $k<\omega$ we chose, so we really showed an ostensibly stronger property. We say that a covering $f:G_Y(U)\to G_X(T)$ is a k-covering if $T$ and $U$ agree up to level $2k$, and that $\pi_f$ is the identity up to this level.
What the above proof then shows is that given any closed game $G_X(T,A)$ and $k<\omega$ we can find a $k$-covering that unravels $A$. The reason why we care about this strengthening is that the $k$-coverings allow us to ensure the existence of certain inverse limits of games, which we will need in our induction.
Proposition (Existence of inverse limits). Let $k<\omega$ and $f_{i+1}:G_{X_{i+1}}(T_{i+1})\to G_{X_i}(T_i)$ be a $(k+i)$-covering for every $i<\omega$. Then there’s a game $G_X(T)$ and $(k+i)$-coverings $F_i:G_X(T)\to G_{X_i}(T_i)$ for every $i<\omega$ such that $f_{i+1}\circ F_{i+1}=F_i$, and $G_X(T)$ is the universal such game.
The “universal” statement at the end means that if $\mathcal G$ is another game with $(k+i)$-coverings $H_i:\mathcal G\to G_{X_i}(T_i)$ such that $f_{i+1}\circ H_{i+1}=H_i$ then there’s a unique covering $H:\mathcal G\to G_X(T)$ satisfying $F_i\circ H=H_i$ for every $i<\omega$. In other words, $G_X(T)$ is the “supremum” of the $G_{X_i}(T_i)$‘s.
For the inductive argument fix some $\xi<\omega_1$ and let’s assume that we’ve shown that $\bf\Pi^0_\eta$ games, and thus also $\bf\Sigma^0_\eta$ games, are $k$-unraveled for all $k<\omega$ and $\eta<\xi$. Let $G_X(T,A)$ be a $\bf\Sigma^0_\xi$ game, meaning that $A$ is a countable union of $\Pi^0_\eta$ sets $A_n$ for $\eta<\xi$.
By assumption we get a $k$-covering $f_0:G_{X_1}(T_1)\to G_{X_0}(T_0)$ unravelling $A_0$, where $X_0:=X$, $A_0:=A$ and $T_0:=T$, and recursively
$f_i:G_{X_{i+1}}(T_{i+1})\to G_{X_i}(T_i)$
is a $(k+i)$-covering unravelling $\tilde\pi_{f_{i-1}}^{-1}\circ\cdots\circ\tilde\pi_{f_1}^{-1}[A_i]$, which exists as $\bf\Pi^0_\eta$ is closed under continuous preimages for $\eta<\xi$.
We can then take the inverse limit $G_Y(U):=\varprojlim_n G_{X_n}(T_n)$ with $k$-coverings $F_i:G_Y(U)\to G_{X_i}(T_i)$. Now $F_0:G_Y(U)\to G_X(T)$ is a $k$-covering unravelling every $A_i$, since
$\tilde\pi_{F_0}^{-1}[A_i]=\tilde\pi_{F_i}^{-1}\circ\tilde\pi_{f_{i-1}}^{-1}\circ\cdots\circ\tilde\pi_{f_1}^{-1}[A]$
and $\tilde\pi_{F_i}$ is continuous. Then $\tilde\pi_{F_0}^{-1}[A]=\bigcup_n\tilde\pi_{F_0}^{-1}[A_n]$ is open, so we get a $k$-covering $H:G_Z(V)\to G_Y(U)$ $k$-unravelling $\tilde\pi_{F_0}^{-1}[A]$. But now
$F_0\circ H:G_Z(V)\to G_X(T)$
is a $k$-covering unravelling $A$ and we’re done. QED
So given any $\bf\Pi^0_\eta$ game $G_X(T,A)$ we find a linear system of coverings of length $\eta\omega$ that collectively unravel $A$. The existence of this linear system requires that we use the axiom of replacement $\eta$ many times, and this was shown by Friedman (1971) to be necessary as well.
Friedman, Harvey (1971). “Higher set theory and mathematical practice”. Annals of Mathematical Logic. 2 (3): 325–357.
|
2018-01-24 05:50:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 100, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8387790322303772, "perplexity": 276.2846126619071}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084893397.98/warc/CC-MAIN-20180124050449-20180124070449-00148.warc.gz"}
|
https://www.intmath.com/blog/learn-math/what-is-a-shifting-function-12579
|
# What is a Shifting Function
By Kathleen Cantor, 03 Apr 2021
Most people have seen some basic graphs before. Graphs are pictorial representations of data and values along axes. By understanding basic graphs and how to apply translations to them, you'll realize that each new graph is a variation of the old one. It is not a completely different graph than you've ever seen before. Understanding these translations will allow you to quickly recognize and sketch a new function without resorting to drawing points.
This brings us to the meaning of shifting functions. Shifting functions don't change the size and shape of the graph but rather its position.
## Common Functions
A shift is an addition or subtraction to the x or f(x) component. When you shift a function, you're basically changing the position of the graph of the function. A vertical shift raises or lowers the function as it adds or subtracts a constant to each y coordinate, while the x coordinate remains the same. A horizontal shift moves the function right or left since it adds or subtracts a constant to each x coordinate while keeping the Y coordinate unchanged. You can combine vertical and horizontal shifts in a single expression. If the constants are grouped with x, then the shift is horizontal; otherwise, it is vertical.
Common functions include:
• Constant functions: y = c
• Linear functions: y = x
• Quadratic functions: y = x^2
• Cubic function: y = x^3
• Absolute value function: y = |x|
• Square root function: y = sqrt(x)
For example, start with the basic quadratic function
f(x) = x^2
Then you can make some related graphs, such as
g(x) = -x^2 - 4x + 5
h(x) = x^2 - 3x - 4
k(x) = (x + 4)^2
In each case, the basic parabolic shape is the same. The only difference is where the vertex is and whether it is right-side up or upside down. If you've been doing hands-on graphing, you've probably started to notice some relationships between the equation and the graph. The topic of function transformations makes these relationships more explicit.
#### Moving Up and Down
Let's start by looking at the function symbols for the basic quadratic equation.
f(x) = x^2
A function transformation, or translation, is a fancy way of saying that you change the equation a bit so that the graph moves.
To move the function up, add to the function: f(x) + b is f(x) moved up by b units. The same is true for shifting the function downward; f(x) - b is f(x) shifted downward by b units.
We can add a 3 to the basic quadratic equation f(x) = x^2, going from the basic quadratic function x^2 to the transformed function x^2 + 3.
This moves the function up three units.
#### Shifting to the Left and the Right
To move a function left or right, a constant is added or subtracted from x, respectively.
Let’s look at the equation y = (x + 3)^2
In this graph, f(x) shifts three units to the left. Now, instead of graphing f(x) we are graphing f(x + 3). This means the equation is now y = (x + 3)^2, and the original graph is shifted three units to the left.
When moving a function to the left, always add to the function's argument: f(x + b) shifts f(x) b units to the left. The reverse is true for moving to the right, f(x - b) is always f(x) shifted b units to the right.
#### Reflected Function
Functions flipped over the x-axis and mirrored across the y-axis are called reflections. You find the equation for functions reflected over the x-axis by taking the original function's negative value.
If you reflect f(x) = x^2 + 2x - 3 over the x-axis, it becomes f(x) = -(x^2) - 2x + 3. This always works for flipping a function upside down.
To illustrate how this transformation works, remember that f(x) is the same as y. By adding a minus sign to everything, you change all positive (upper axis) y values to negative (lower axis) y values, and vice versa. Any points on the x-axis will stay as they are, and only the off-axis points will move.
To reflect a function across the y-axis, let's consider the cubic function g(x) = x^3 + x^2 - 3x - 1.
If you replace the x from the original function with -x, you get
g(-x) = (-x)^3 + (-x)^2 - 3(-x) - 1
g(-x) = -x^3 + x^2 - (-3x) - 1
g(-x) = -x^3 + x^2 + 3x - 1
This transformation will flip the original graph across the y-axis. Any point on the y-axis stays on the y-axis; only off-axis points change sides.
#### Function Conversion/Transformation Rules
f(x) + b shifts the function up by b units.
f(x) – b shifts the function down by b units.
f(x + b) shifts the function to the left by b units.
f(x - b) shifts the function to the right by b units.
-f(x) reflects the position of the function on the x-axis (i.e., inverted).
f(-x) reflects the position of the function on the y-axis (i.e., left and right swapped).
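These rules are easy to check numerically. Below is a minimal sketch in Python (the `shift` helper is hypothetical, written just for this illustration):
```
def f(x):
    return x ** 2

def shift(func, right=0, up=0):
    # Return g(x) = func(x - right) + up, per the rules above.
    return lambda x: func(x - right) + up

g = shift(f, right=3)         # f(x - 3): the parabola moved 3 units right
h = shift(f, right=-3, up=2)  # f(x + 3) + 2: moved 3 left and 2 up

print(f(0), g(3), h(-3))      # 0 0 2 -> each vertex lands where the rules predict
```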
## Real-Life Examples of Shifting Functions
If the sun is shining, water tends to reflect everything. If you look at the mountains behind a lake from a distance, you will see the mountains reflected downward in the water. In mathematical terms, the image is reflected over the x-axis, creating the same image, only flipped.
Changes in mirrors. When you look into a mirror, you see an accurate reflection of yourself. Most of us look in the mirror every day and are not even aware of this reflection.
A moving car. The car demonstrates transformation because it is moving, but it does not change in size or shape. It also does not flip sideways or upside down.
|
2023-03-20 13:51:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5694338083267212, "perplexity": 989.0423566667986}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943483.86/warc/CC-MAIN-20230320114206-20230320144206-00523.warc.gz"}
|
http://uu.diva-portal.org/smash/resultList.jsf?af=%5B%5D&aq=%5B%5B%7B%22personId%22%3A%22authority-person%3A16996%22%7D%5D%5D&aqe=%5B%5D&aq2=%5B%5B%5D%5D&language=en&query=
|
Uppsala University Publications
• 1.
Univ. Paris-Sud, CNRS, Université Paris-Saclay.
Univ. Paris-Sud, CNRS, Université Paris-Saclay. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Mathematics, Analysis and Probability Theory.
L2 well-posedness of boundary value problems and the Kato square root problem for parabolic systems with measurable coefficients. 2016. In: Journal of the European Mathematical Society (Print), ISSN 1435-9855, E-ISSN 1435-9863. Article in journal (Refereed)
We introduce a first order strategy to study boundary value problems of parabolic systems with second order elliptic part in the upper half-space. This involves a parabolic Dirac operator at the boundary. We allow for measurable time dependence and some transversal dependence in the coefficients. We obtain layer potential representations for solutions in some classes and prove new well-posedness and perturbation results. As a byproduct, we prove for the first time a Kato estimate for the square root of parabolic operators with time dependent coefficients. This considerably extends prior results obtained by one of us under time and transversal independence. A major difficulty compared to a similar treatment of elliptic equations is the presence of non-local fractional derivatives in time.
• 2.
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Mathematics, Analysis and Applied Mathematics.
University of Arkansas. University of Bologna. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Mathematics, Analysis and Applied Mathematics.
Harnack estimates for degenerate parabolic equations modeled on the subelliptic $p$-Laplacian. 2014. In: Advances in Mathematics, ISSN 0001-8708, E-ISSN 1090-2082, Vol. 257, p. 25-65. Article in journal (Refereed)
We establish a Harnack inequality for a class of quasi-linear PDE modeled on the prototype $$\partial_t u = -\sum_{i=1}^{m} X_i^\ast\left(|Xu|^{p-2} X_i u\right)$$ where $p\ge 2$ and $X = (X_1,\ldots,X_m)$ is a system of Lipschitz vector fields defined on a smooth manifold $M$ endowed with a Borel measure $\mu$, and $X_i^*$ denotes the adjoint of $X_i$ with respect to $\mu$. Our estimates are derived assuming that (i) the control distance $d$ generated by $X$ induces the same topology on $M$; (ii) a doubling condition holds for the $\mu$-measure of $d$-metric balls; and (iii) a Poincaré inequality involving $X$ and $\mu$ is valid. Our results extend the recent work in [DiBenedettoGianazzaVespri1], [K], to a more general setting including the model cases of (1) metrics generated by Hörmander vector fields and Lebesgue measure; (2) Riemannian manifolds with non-negative Ricci curvature and Riemannian volume forms; and (3) metrics generated by non-smooth Baouendi-Grushin type vector fields and Lebesgue measure. In all cases the Harnack inequality continues to hold when the Lebesgue measure is substituted by any smooth volume form or by measures with densities corresponding to Muckenhoupt type weights.
• 3.
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Mathematics, Analysis and Probability Theory.
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Mathematics, Analysis and Probability Theory.
Boundary behavior of solutions to the parabolic p-Laplace equation2019In: Analysis & PDE, ISSN 2157-5045, E-ISSN 1948-206X, Vol. 12, no 1, p. 1-42Article in journal (Refereed)
We establish boundary estimates for non-negative solutions to the $p$-parabolic equation in the degenerate range $p>2$. Our main results include new parabolic intrinsic Harnack chains in cylindrical NTA-domains together with sharp boundary decay estimates. If the underlying domain is $C^{1,1}$-regular, we establish a relatively complete theory of the boundary behavior, including boundary Harnack principles and Hölder continuity of the ratios of two solutions, as well as fine properties of associated boundary measures. There is an intrinsic waiting time phenomena present which plays a fundamental role throughout the paper. In particular, conditions on these waiting times rule out well-known examples of explicit solutions violating the boundary Harnack principle.
• 4. Avelin, Benny
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Mathematics, Analysis and Applied Mathematics.
Boundary estimates for solutions to operators of $p$-Laplace type with lower order terms2011In: Journal of Differential Equations, ISSN 0022-0396, E-ISSN 1090-2732, Vol. 250, no 1, p. 264-291Article in journal (Refereed)
• 5. Avelin, Benny
Optimal doubling, Reifenberg flatness and operators of p-Laplace type2011In: Nonlinear Analysis, ISSN 0362-546X, E-ISSN 1873-5215, Vol. 74, no 17, p. 5943-5955Article in journal (Refereed)
In this paper we consider operators of $p$-Laplace type of the form $\nabla\cdot A(x,\nabla u)=0$. Concerning $A$ we assume, for $p\in(1,\infty)$ fixed, an appropriate ellipticity type condition, Hölder continuity in $x$ and that $A(x,\eta)=|\eta|^{p-1}A(x,\eta/|\eta|)$ whenever $x\in\mathbb R^n$ and $\eta\in\mathbb R^n\setminus\{0\}$. Let $\Omega\subset\mathbb R^n$ be a bounded domain, let $D$ be a compact subset of $\Omega$. We say that $\hat u=\hat u_{p,D,\Omega}$ is the $A$-capacitary function for $D$ in $\Omega$ if $\hat u\equiv 1$ on $D$, $\hat u\equiv 0$ on $\partial\Omega$ in the sense of $W^{1,p}_0(\Omega)$ and $\nabla\cdot A(x,\nabla\hat u)=0$ in $\Omega\setminus D$ in the weak sense. We extend $\hat u$ to $\mathbb R^n\setminus\Omega$ by putting $\hat u\equiv 0$ on $\mathbb R^n\setminus\Omega$. Then there exists a unique finite positive Borel measure $\hat\mu$ on $\mathbb R^n$, with support in $\partial\Omega$, such that $$\int\langle A(x,\nabla\hat u),\nabla\phi\rangle\,dx=-\int\phi\,d\hat\mu \quad\text{whenever } \phi\in C^\infty_0(\mathbb R^n\setminus D).$$ In this paper we prove that if $\Omega$ is Reifenberg flat with vanishing constant, then $$\lim_{r\to 0}\,\inf_{w\in\partial\Omega}\frac{\hat\mu(B(w,\sigma r))}{\hat\mu(B(w,r))}=\lim_{r\to 0}\,\sup_{w\in\partial\Omega}\frac{\hat\mu(B(w,\sigma r))}{\hat\mu(B(w,r))}=\sigma^{n-1},$$ for every $\sigma$, $0<\sigma\le 1$. In particular, we prove that $\hat\mu$ is an asymptotically optimal doubling measure on $\partial\Omega$.
• 6. Avelin, Benny
Estimates for Solutions to Equations of p-Laplace type in Ahlfors regular NTA-domains2014In: Journal of Functional Analysis, ISSN 0022-1236, E-ISSN 1096-0783, Vol. 266, no 9, p. 5955-6005Article in journal (Refereed)
• 7.
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Mathematics.
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Mathematics.
Wolff-Potential Estimates and Doubling of Subelliptic p-harmonic measures2013In: Nonlinear Analysis, ISSN 0362-546X, E-ISSN 1873-5215, Vol. 85, p. 149-159Article in journal (Refereed)
Let $X=\{X_1,\ldots,X_m\}$ be a system of $C^\infty$ vector fields in $\mathbb R^n$ satisfying Hörmander's finite rank condition and let $\Omega$ be a non-tangentially accessible domain with respect to the Carnot–Carathéodory distance $d$ induced by $X$. We prove the doubling property of certain boundary measures associated to non-negative solutions, which vanish on a portion of $\partial\Omega$, to the equation $$\sum_{i=1}^{m}X_i^{\ast}\bigl(A_i(x,Xu)\bigr)=0.$$
Given $p$, $1<p<\infty$, fixed, we impose conditions on the function $A=(A_1,\ldots,A_m):\mathbb R^n\times\mathbb R^m\to\mathbb R^m$, which imply that the equation is a quasi-linear partial differential equation of $p$-Laplace type structured on vector fields satisfying the classical Hörmander condition. In the case $p=2$ and for linear equations, our result coincides with the doubling property of associated elliptic measures. To prove our result we establish, and this is of independent interest, a Wolff potential estimate for subelliptic equations of $p$-Laplace type.
• 8.
University of Washington, Seattle, USA.
University of Missouri, Columbia, USA. Instituto de Ciencias Matematicas, Madrid, Spain. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Mathematics, Analysis and Probability Theory. University of Washington, Seattle, USA.
A new characterization of chord-arc domains2017In: Journal of the European Mathematical Society (Print), ISSN 1435-9855, E-ISSN 1435-9863, Vol. 19, no 4, p. 967-981Article in journal (Refereed)
We show that if Ω⊂Rn+1, n≥1, is a uniform domain (also known as a 1-sided NTA domain), i.e., a domain which enjoys interior Corkscrew and Harnack Chain conditions, then uniform rectifiability of the boundary of Ω implies the existence of exterior corkscrew points at all scales, so that in fact, Ω is a chord-arc domain, i.e., a domain with an Ahlfors-David regular boundary which satisfies both interior and exterior corkscrew conditions, and an interior Harnack chain condition. We discuss some implications of this result for theorems of F. and M. Riesz type, and for certain free boundary problems.
• 9.
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Mathematics, Analysis and Probability Theory.
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Mathematics, Analysis and Probability Theory.
A unified deep artificial neural network approach to partial differential equations in complex geometries2018In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 317, p. 28-41Article in journal (Refereed)
In this paper, we use deep feedforward artificial neural networks to approximate solutions to partial differential equations in complex geometries. We show how to modify the backpropagation algorithm to compute the partial derivatives of the network output with respect to the space variables, which is needed to approximate the differential operator. The method is based on an ansatz for the solution which requires nothing but feedforward neural networks and an unconstrained gradient-based optimization method such as gradient descent or a quasi-Newton method. We show an example where classical mesh-based methods cannot be used and neural networks can be seen as an attractive alternative. Finally, we highlight the benefits of deep compared to shallow neural networks and devise some other convergence-enhancing techniques.
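As a rough, self-contained illustration of the ansatz idea summarized in this abstract (this is not the authors' code; the test problem, network size, collocation grid and optimizer below are all assumptions), boundary conditions can be enforced by construction and the PDE residual minimized directly:

```python
import numpy as np

# Toy sketch only: fit u(x) = x*(1-x)*N(x), which satisfies u(0)=u(1)=0 by
# construction, to the 1-D Poisson problem u''(x) = -pi^2 sin(pi x), whose
# exact solution is sin(pi x). Everything here is an assumption made for
# illustration; the paper treats complex geometries and modified backprop.

rng = np.random.default_rng(0)
H = 10                                           # hidden width, 1-H-1 net
params = rng.normal(scale=0.5, size=3 * H + 1)   # w1, b1, w2, b2

def net(p, x):
    w1, b1, w2, b2 = p[:H], p[H:2*H], p[2*H:3*H], p[-1]
    return np.tanh(np.outer(x, w1) + b1) @ w2 + b2

def ansatz(p, x):
    return x * (1.0 - x) * net(p, x)

xs = np.linspace(0.0, 1.0, 51)                   # collocation points
f = -np.pi**2 * np.sin(np.pi * xs)

def loss(p, h=1e-4):
    # PDE residual; second derivative by central finite differences
    d2u = (ansatz(p, xs + h) - 2 * ansatz(p, xs) + ansatz(p, xs - h)) / h**2
    return np.mean((d2u - f) ** 2)

print("initial residual loss: %.3f" % loss(params))
lr, eps = 1e-4, 1e-6
for _ in range(500):             # crude gradient descent, numerical grads
    base = loss(params)
    grad = np.empty_like(params)
    for i in range(params.size):
        bumped = params.copy(); bumped[i] += eps
        grad[i] = (loss(bumped) - base) / eps
    params -= lr * grad
print("final residual loss:   %.3f" % loss(params))
```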
• 10.
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Mathematics, Analysis and Probability Theory.
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Mathematics, Analysis and Probability Theory. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Mathematics, Analysis and Probability Theory.
Boundedness of single layer potentials associated to divergence form parabolic equations with complex coefficients2016In: Calculus of Variations and Partial Differential Equations, ISSN 0944-2669, E-ISSN 1432-0835, Vol. 55, no 5, article id 124Article in journal (Refereed)
We consider parabolic operators of the form $$\partial_t+\mathcal{L},\ \mathcal{L}:=-\mbox{div}\, A(X,t)\nabla,$$ in$\mathbb R_+^{n+2}:=\{(X,t)=(x,x_{n+1},t)\in \mathbb R^{n}\times \mathbb R\times \mathbb R:\ x_{n+1}>0\}$, $n\geq 1$. We assume that $A$ is an $(n+1)\times (n+1)$-dimensional matrix which is bounded, measurable, uniformly elliptic and complex, and we assume, in addition, that the entries of A are independent of the spatial coordinate $x_{n+1}$ as well as of the time coordinate $t$. We prove that the boundedness of associated single layer potentials, with data in $L^2$, can be reduced to two crucial estimates (Theorem \ref{th0}), one being a square function estimate involving the single layer potential. By establishing a local parabolic Tb-theorem for square functions we are then able to verify the two crucial estimates in the case of real, symmetric operators (Theorem \ref{th2}). Our results are crucial when addressing the solvability of the classical Dirichlet, Neumann and Regularity problems for the operator $\partial_t+\mathcal{L}$ in $\mathbb R_+^{n+2}$, with $L^2$-data on $\mathbb R^{n+1}=\partial\mathbb R_+^{n+2}$, and by way of layer potentials.
• 11. Cinti, Chiara
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Mathematics, Analysis and Applied Mathematics.
A boundary estimate for non-negative solutions to Kolmogorov operators in non-divergence form2012In: Annali di Matematica Pura ed Applicata, ISSN 0373-3114, E-ISSN 1618-1891, Vol. 191, no 1, p. 1-23Article in journal (Refereed)
We consider non-negative solutions to a class of second-order degenerate Kolmogorov operators of the form $$\mathcal L=\sum_{i,j=1}^{m}a_{i,j}(z)\partial_{x_ix_j}+\sum_{i=1}^{m}a_i(z)\partial_{x_i}+\sum_{i,j=1}^{N}b_{i,j}x_i\partial_{x_j}-\partial_t,$$ where $z=(x,t)$ belongs to an open set $\Omega\subset\mathbb R^N\times\mathbb R$, and $1\le m\le N$. Let $\tilde z\in\Omega$, let $K$ be a compact subset of $\overline\Omega$, and let $\Sigma\subset\partial\Omega$ be such that $K\cap\partial\Omega\subset\Sigma$. We give sufficient geometric conditions for the validity of the following Carleson type estimate. There exists a positive constant $C_K$, depending only on $\Omega,\Sigma,K,\tilde z$ and on $\mathcal L$, such that $$\sup_K u\le C_K u(\tilde z),$$ for every non-negative solution $u$ of $\mathcal Lu=0$ in $\Omega$ such that $u|_\Sigma=0$.
• 12. Cinti, Chiara
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Mathematics, Analysis and Applied Mathematics.
A Carleson-type estimate in Lipschitz type domains for non-negative solutions to Kolmogorov equations2013In: Annali della Scuola Normale Superiore di Pisa (Classe Scienze), Serie V, ISSN 0391-173X, E-ISSN 2036-2145, Vol. 12, no 2, p. 439-465Article in journal (Refereed)
We prove a Carleson type estimate, in Lipschitz type domains, for non-negative solutions to a class of second order degenerate differential operators of Kolmogorov type of the form $$L=\sum_{i,j=1}^{m}a_{i,j}(z)\partial_{x_ix_j}+\sum_{i=1}^{m}a_i(z)\partial_{x_i}+\sum_{i,j=1}^{N}b_{i,j}x_i\partial_{x_j}-\partial_t,$$ where $z=(x,t)\in\mathbb R^{N+1}$, $1\le m\le N$. Our estimate is scale-invariant and generalizes previous results valid for second order uniformly parabolic equations to the class of operators considered.
• 13. Cinti, Chiara
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Mathematics, Analysis and Applied Mathematics.
A note on Harnack inequalities and propagation sets for a class of hypoelliptic operators2010In: Potential Analysis, ISSN 0926-2601, E-ISSN 1572-929X, Vol. 33, no 4, p. 341-354Article in journal (Refereed)
• 14.
U.Dini, Firenze, Italy.
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Mathematics, Analysis and Probability Theory. U.Dini, Firenze, Italy. Memorial University of Newfoundland, St. John's, Canada. Polytechnic Institute of New York University, New York, USA. Polytechnic Institute of New York University, New York, USA.
The Hadamard variational formula and the Minkowski problem for p-Capacity2015In: Advances in Mathematics, ISSN 0001-8708, E-ISSN 1090-2082, Vol. 285, p. 1511-1585Article in journal (Refereed)
A Hadamard variational formula for $p$-capacity of convex bodies in $\mathbb R^n$ is established when $1<p<n$. The formula is applied to solve the Minkowski problem for $p$-capacity, which involves a degenerate Monge-Ampère type equation. Uniqueness for the Minkowski problem for $p$-capacity is established when $1<p<n$ and existence and regularity when $1<p<2$. These results are (non-linear) extensions of the now classical solution of Jerison of the Minkowski problem for electrostatic capacity ($p=2$).
• 15. Frentz, Marie
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Mathematics, Analysis and Applied Mathematics.
Non-divergence form parabolic equations associated with non-commuting vector fields: boundary behavior of nonnegative solutions2012In: Annali della Scuola Normale Superiore di Pisa (Classe Scienze), Serie V, ISSN 0391-173X, E-ISSN 2036-2145, Vol. 11, no 2, p. 437-474Article in journal (Refereed)
In a cylinder $\Omega_T=\Omega\times(0,T)\subset\mathbb R^{n+1}_+$ we study the boundary behavior of non-negative solutions of second order parabolic equations of the form $$Hu=\sum_{i,j=1}^{m}a_{ij}(x,t)X_iX_ju-\partial_tu=0,\quad (x,t)\in\mathbb R^{n+1}_+,$$ where $X=\{X_1,\ldots,X_m\}$ is a system of $C^\infty$ vector fields in $\mathbb R^n$ satisfying Hörmander's rank condition (1.2), and $\Omega$ is a non-tangentially accessible domain with respect to the Carnot–Carathéodory distance $d$ induced by $X$. Concerning the matrix-valued function $A=\{a_{ij}\}$, we assume that it is real, symmetric and uniformly positive definite. Furthermore, we suppose that its entries $a_{ij}$ are Hölder continuous with respect to the parabolic distance associated with $d$. Our main results are: 1) a backward Harnack inequality for non-negative solutions vanishing on the lateral boundary (Theorem 1.1); 2) the Hölder continuity up to the boundary of the quotient of two non-negative solutions which vanish continuously on a portion of the lateral boundary (Theorem 1.2); 3) the doubling property for the parabolic measure associated with the operator $H$ (Theorem 1.3). These results generalize, to the subelliptic setting of the present paper, those in Lipschitz cylinders by Fabes, Safonov and Yuan in [20, 39]. With one proviso: in those papers the authors assume that the coefficients $a_{ij}$ be only bounded and measurable, whereas we assume Hölder continuity with respect to the intrinsic parabolic distance.
• 16. Frentz, Marie
The Obstacle Problem for Parabolic Non-divergence Form Operators of Hörmander type2012In: Journal of Differential Equations, ISSN 0022-0396, E-ISSN 1090-2732, Vol. 252, no 9, p. 5002-5041Article in journal (Refereed)
In this paper we establish the existence and uniqueness of strong solutions to the obstacle problem for a class of parabolic sub-elliptic operators in non-divergence form structured on a set of smooth vector fields in $\mathbb R^n$, $X=\{X_1,\ldots,X_q\}$, $q\le n$, satisfying Hörmander's finite rank condition. We furthermore prove that any strong solution belongs to a suitable class of Hölder continuous functions. As part of our argument, and this is of independent interest, we prove a Sobolev type embedding theorem, as well as certain a priori interior estimates, valid in the context of Sobolev spaces defined in terms of the system of vector fields.
• 17. Frentz, Marie
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Mathematics, Analysis and Applied Mathematics.
Adaptive stochastic weak approximation of degenerate parabolic equations of Kolmogorov type2010In: Journal of Computational and Applied Mathematics, ISSN 0377-0427, E-ISSN 1879-1778, Vol. 234, no 1, p. 146-164Article in journal (Refereed)
• 18. Frentz, Marie
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Mathematics, Analysis and Applied Mathematics.
Optimal regularity in the obstacle problem for Kolmogorov operators related to American Asian options2010In: Mathematische Annalen, ISSN 0025-5831, E-ISSN 1432-1807, Vol. 347, no 4, p. 805-838Article in journal (Refereed)
• 19. Götmark, Elin
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Mathematics, Analysis and Applied Mathematics.
Boundary behaviour of non-negative solutions to degenerate sub-elliptic equations2013In: Journal of Differential Equations, ISSN 0022-0396, E-ISSN 1090-2732, Vol. 254, no 8, p. 3431-3460Article in journal (Refereed)
Let $X=\{X_1,\ldots,X_m\}$ be a system of $C^\infty$ vector fields in $\mathbb R^n$ satisfying Hörmander's finite rank condition and let $\Omega$ be a non-tangentially accessible domain with respect to the Carnot–Carathéodory distance $d$ induced by $X$. We study the boundary behavior of non-negative solutions to the equation $$Lu=\sum_{i,j=1}^{m}X_i^{\ast}\bigl(a_{ij}(x)X_ju(x)\bigr)=0,$$ where the symmetric matrix-valued function $\{a_{ij}(x)\}$ is degenerate elliptic in the sense that $$\beta^{-1}\lambda(x)|\xi|^2\le\sum_{i,j=1}^{m}a_{ij}(x)\xi_i\xi_j\le\beta\lambda(x)|\xi|^2$$ for some constant $\beta\ge 1$ and for some non-negative and real-valued function $\lambda=\lambda(x)$. Concerning $\lambda$ we assume that $\lambda$ defines an $A_2$-weight with respect to the metric introduced by the system of vector fields $X=\{X_1,\ldots,X_m\}$. Our main results include a proof of the doubling property of the associated elliptic measure and the Hölder continuity up to the boundary of quotients of non-negative solutions which vanish continuously on a portion of the boundary. Our results generalize previous results of Fabes et al. (1982, 1983) [18-20] ($m=n$, $\{X_1,\ldots,X_m\}=\{\partial_{x_1},\ldots,\partial_{x_n}\}$, $\lambda$ is an $A_2$-weight) and Capogna and Garofalo (1998) [6] ($X=\{X_1,\ldots,X_m\}$ satisfies Hörmander's finite rank condition and $\lambda(x)\equiv\lambda$ for some constant $\lambda$). One motivation for this study is the ambition to generalize, as far as possible, the results in Lewis and Nyström (2007, 2010, 2008) [35-38], Lewis et al. (2008) [34] concerning the boundary behavior of non-negative solutions to (Euclidean) quasi-linear equations of $p$-Laplace type, to non-negative solutions to certain sub-elliptic quasi-linear equations of $p$-Laplace type.
• 20.
Univ Missouri, Dept Math, Columbia, MO 65211 USA..
Syracuse Univ, Math Dept, 215 Carnegie Bldg, Syracuse, NY 13244 USA.. CSIC, CSIC, Inst Ciencias Matemat, UAM,UC3M,UCM, C Nicolas Cabrera 13-15, E-28049 Madrid, Spain.. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Mathematics, Analysis and Probability Theory.
The weak-$A_\infty$ property of harmonic and $p$-harmonic measures implies uniform rectifiability2017In: Analysis & PDE, ISSN 2157-5045, E-ISSN 1948-206X, Vol. 10, no 3, p. 513-558Article in journal (Refereed)
Let $E\subset\mathbb R^{n+1}$, $n\ge 2$, be an Ahlfors-David regular set of dimension $n$. We show that the weak-$A_\infty$ property of harmonic measure, for the open set $\Omega:=\mathbb R^{n+1}\setminus E$, implies uniform rectifiability of $E$. More generally, we establish a similar result for the Riesz measure, $p$-harmonic measure, associated to the $p$-Laplace operator, $1<p<\infty$.
• 21.
University of Missouri, Columbia, USA.
University of Missouri, Columbia, USA. Instituto de Ciencias Matematicas, Madrid, Spain. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Mathematics, Analysis and Probability Theory.
The weak-A property of harmonic and p-harmonic measures implies uniform rectifiability2015In: Analysis & PDE, ISSN 2157-5045, E-ISSN 1948-206XArticle in journal (Refereed)
Let $E\subset \ree$, $n\ge 2$, be an Ahlfors-David regular set of dimension $n$. We show that the weak-$A_\infty$ property of harmonic measure, for the open set$\Omega:= \ree\setminus E$, implies uniform rectifiability of $E$. More generally, we establish a similar result for the Riesz measure, $p$-harmonic measure,associated to the $p$-Laplace operator, $1<p<\infty$.
• 22. Hofmann, Steve
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Mathematics, Analysis and Applied Mathematics.
Caloric measure in parabolic flat domains2004In: Duke mathematical journal, ISSN 0012-7094, E-ISSN 1547-7398, Vol. 122, p. 281-346Article in journal (Refereed)
• 23. Hofmann, Steve
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Mathematics, Analysis and Applied Mathematics.
Existence of big pieces of graphs for parabolic problems2003In: Annales Academiae Scientiarum Fennicae Mathematica, ISSN 1239-629X, E-ISSN 1798-2383, Vol. 28, no 2, p. 355-384Article in journal (Refereed)
• 24. Hofmann, Steve
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Mathematics, Analysis and Applied Mathematics.
Dirichlet problems for a nonstationary linearized system of Navier-Stokes equations in non-cylindrical domains2002In: Methods and Applications of Analysis, ISSN 1073-2772, E-ISSN 1945-0001, Vol. 9, p. 13-98Article in journal (Refereed)
• 25. Kukavica, Igor
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Mathematics, Analysis and Applied Mathematics.
Unique continuation on the boundary for Dini domains1998In: Proceedings of the American Mathematical Society, ISSN 0002-9939, E-ISSN 1088-6826, Vol. 126, p. 441-446Article in journal (Refereed)
• 26. Kuusi, Tuomo
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Mathematics, Analysis and Applied Mathematics.
A Boundary Harnack Inequality for Singular Equations of p-Parabolic Type2014In: Proceedings of the American Mathematical Society, ISSN 0002-9939, E-ISSN 1088-6826, Vol. 142, no 8, p. 2705-2719Article in journal (Refereed)
We prove a boundary Harnack type inequality for non-negative solutions to singular equations of $p$-parabolic type, $2n/(n+1)<p<2$, in a time-independent cylinder whose base is $C^{1,1}$-regular. Simple examples show, using the corresponding estimates valid for the heat equation as a point of reference, that this type of inequality cannot, in general, be expected to hold in the degenerate case ($2<p<\infty$).
• 27. Kuusi, Tuomo
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Mathematics, Analysis and Applied Mathematics.
Sharp regularity for evolutionary obstacle problems, interpolative geometries and removable sets2014In: Journal des Mathématiques Pures et Appliquées, ISSN 0021-7824, E-ISSN 1776-3371, Vol. 101, no 2, p. 119-151Article in journal (Other academic)
In this paper we prove, by showing that solutions have exactly the same degree of regularity as the obstacle, optimal regularity results for obstacle problems involving evolutionary p-Laplace type operators. A main ingredient, of independent interest, is a new intrinsic interpolative geometry allowing for optimal linearization principles via blow-up analysis at contact points. This also opens the way to the proof of a removability theorem for solutions to evolutionary p-Laplace type equations. A basic feature of the paper is that no differentiability in time is assumed on the obstacle; this is in line with the corresponding linear results.
• 28. Lewis, John L.
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Mathematics, Analysis and Applied Mathematics.
Boundary Harnack inequalities for operators of p-Laplace type in Reifenberg flat domains2008In: Proceedings of Symposia in Pure Mathematics, Vol. 79, p. 229-266Article in journal (Refereed)
• 29. Lewis, John L
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Mathematics, Analysis and Applied Mathematics.
Boundary behavior and the Martin boundary problem for $p$ harmonic functions in Lipschitz domains2010In: Annals of Mathematics, ISSN 0003-486X, E-ISSN 1939-8980, Vol. 172, no 3, p. 1907-1948Article in journal (Refereed)
• 30. Lewis, John L.
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Mathematics, Analysis and Applied Mathematics.
Boundary behaviour for $p$ harmonic functions in Lipschitz and starlike Lipschitz ring domains2007In: Annales Scientifiques de l'Ecole Normale Supérieure, ISSN 0012-9593, E-ISSN 1873-2151, Vol. 40, p. 765-813Article in journal (Refereed)
• 31. Lewis, John L.
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Mathematics, Analysis and Applied Mathematics.
Boundary behaviour of $p$-harmonic functions in domains beyond Lipschitz domains2008In: Advances in Calculus of Variations, ISSN 1864-8258, E-ISSN 1864-8266, Vol. 1, p. 133-170Article in journal (Refereed)
• 32. Lewis, John L.
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Mathematics, Analysis and Applied Mathematics.
New results for $p$ harmonic functions2011In: Pure and Applied Mathematics Quarterly, ISSN 1558-8599, E-ISSN 1558-8602, Vol. 7, p. 345-363Article in journal (Refereed)
• 33. Lewis, John L.
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Mathematics, Analysis and Applied Mathematics.
On a parabolic symmetry problem2007In: Revista matemática iberoamericana, ISSN 0213-2230, E-ISSN 2235-0616, Vol. 23, p. 513-536Article in journal (Refereed)
• 34.
University of Kentucky, Lexington, KY, USA.
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Mathematics, Analysis and Probability Theory.
Quasi-linear PDEs and low-dimensional sets2018In: Journal of the European Mathematical Society (Print), ISSN 1435-9855, E-ISSN 1435-9863, Vol. 20, no 7, p. 1689-1746Article in journal (Refereed)
In this paper we establish new results concerning boundary Harnack inequalities and the Martin boundary problem, for non-negative solutions to equations of $p$-Laplace type with variable coefficients. The key novelty is that we consider solutions which vanish only on a low-dimensional set $\Sigma$ in $\mathbb R^n$ and this is different compared to the more traditional setting of boundary value problems set in the geometrical situation of a bounded domain in $\mathbb R^n$ having a boundary with (Hausdorff) dimension in the range $[n-1,n)$. We establish our quantitative and scale-invariant estimates in the context of low-dimensional Reifenberg flat sets.
• 35. Lewis, John L.
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Mathematics, Analysis and Applied Mathematics.
Regularity and free boundary regularity for the $p$ Laplacian in Lipschitz and $C^1$ domains2008In: Annales Academiae Scientiarum Fennicae Mathematica, ISSN 1239-629X, E-ISSN 1798-2383, Vol. 33, p. 523-548Article in journal (Refereed)
• 36. Lewis, John L.
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Mathematics, Analysis and Applied Mathematics.
Regularity and free boundary regularity for the p-Laplace operator in Reifenberg flat and Ahlfors regular domains2012In: Journal of The American Mathematical Society, ISSN 0894-0347, E-ISSN 1088-6834, Vol. 25, no 3, p. 827-862Article in journal (Refereed)
• 37. Lewis, John L.
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Mathematics, Analysis and Applied Mathematics.
Regularity of flat free boundaries in two-phase problems for the p-Laplace operator2012In: Annales de l'Institut Henri Poincare. Analyse non linéar, ISSN 0294-1449, E-ISSN 1873-1430, Vol. 29, no 1, p. 83-108Article in journal (Refereed)
In this paper we continue the study in Lewis and Nyström (2010) [19], concerning the regularity of the free boundary in a general two-phase free boundary problem for the p-Laplace operator, by proving regularity of the free boundary assuming that the free boundary is close to a Lipschitz graph.
• 38. Lewis, John L.
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Mathematics, Analysis and Applied Mathematics.
Regularity of Lipschitz free boundaries in two-phase problems for the $p$-Laplace operator2010In: Advances in Mathematics, ISSN 0001-8708, E-ISSN 1090-2082, Vol. 225, no 5, p. 2565-2597Article in journal (Refereed)
• 39. Lewis, John L.
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Mathematics, Analysis and Applied Mathematics.
The boundary Harnack inequality for infinity harmonic functions in the plane2008In: Proceedings of the American Mathematical Society, ISSN 0002-9939, E-ISSN 1088-6826, Vol. 136, p. 1311-1323Article in journal (Refereed)
• 40. Lewis, John L
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Mathematics, Analysis and Applied Mathematics.
p-Harmonic Measure in Simply Connected Domains2011In: Annales de l'Institut Fourier, ISSN 0373-0956, E-ISSN 1777-5310, Vol. 61, no 2, p. 689-715Article in journal (Refereed)
• 41. Lewis, John L
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Mathematics, Analysis and Applied Mathematics.
On the dimension of p-harmonic measure in space2013In: Journal of the European Mathematical Society (Print), ISSN 1435-9855, E-ISSN 1435-9863, Vol. 15, no 6, p. 2197-2256Article in journal (Refereed)
Let $\Omega\subset\mathbb R^n$, $n\ge 3$, and let $p$, $1<p<\infty$, $p\neq 2$, be given. In this paper we study the dimension of $p$-harmonic measures that arise from non-negative solutions to the $p$-Laplace equation, vanishing on a portion of $\partial\Omega$, in the setting of $\delta$-Reifenberg flat domains. We prove, for $p\ge n$, that there exists $\tilde\delta=\tilde\delta(p,n)>0$ small such that if $\Omega$ is a $\delta$-Reifenberg flat domain with $\delta<\tilde\delta$, then $p$-harmonic measure is concentrated on a set of $\sigma$-finite $H^{n-1}$-measure. We prove, for $p\ge n$, that for sufficiently flat Wolff snowflakes the Hausdorff dimension of $p$-harmonic measure is always less than $n-1$. We also prove that if $2<p<n$, then there exist Wolff snowflakes such that the Hausdorff dimension of $p$-harmonic measure is less than $n-1$, while if $1<p<2$, then there exist Wolff snowflakes such that the Hausdorff dimension of $p$-harmonic measure is larger than $n-1$. Furthermore, perturbing off the case $p=2$, we derive estimates for the Hausdorff dimension of $p$-harmonic measure when $p$ is near 2.
• 42.
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Mathematics, Applied Mathematics and Statistics.
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Mathematics, Analysis and Applied Mathematics. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Mathematics, Applied Mathematics and Statistics.
Optimal switching problems under partial information2015In: Monte Carlo Methods and Applications, ISSN 1569-3961, Vol. 21, no 2, p. 91-120Article in journal (Refereed)
In this paper we formulate and study an optimal switching problem under partial information. In our model the agent/manager/investor attempts to maximize the expected reward by switching between different states/investments. However, he is not fully aware of his environment and only an observation process, which contains partial information about the environment/underlying, is accessible. It is based on the partial information carried by this observation process that all decisions must be made. We propose a probabilistic numerical algorithm based on dynamic programming, regression Monte Carlo methods, and stochastic filtering theory to compute the value function. In this paper, the approximation of the value function and the corresponding convergence result are obtained when the underlying and observation processes satisfy the linear Kalman-Bucy setting. A numerical example is included to show some specific features of partial information.
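For orientation, the sketch below shows the basic regression Monte Carlo ingredient for a plain optimal stopping problem under full information; it is not the paper's algorithm (which adds switching and Kalman-Bucy filtering), and all model parameters are invented:

```python
import numpy as np

# Longstaff-Schwartz-style regression Monte Carlo for optimal stopping of
# a put-style reward on geometric Brownian motion. No discounting, and all
# numbers below are invented for illustration.

rng = np.random.default_rng(1)
n_paths, n_steps, dt = 10_000, 50, 0.02
mu, sigma, x0 = 0.0, 0.3, 1.0

dW = rng.normal(scale=np.sqrt(dt), size=(n_paths, n_steps))
X = x0 * np.exp(np.cumsum((mu - 0.5 * sigma**2) * dt + sigma * dW, axis=1))
X = np.hstack([np.full((n_paths, 1), x0), X])

def payoff(x):
    return np.maximum(1.1 - x, 0.0)

# Backward induction: regress the continuation value on a polynomial basis
# of the current state, exercising when the immediate reward beats it.
value = payoff(X[:, -1])
for t in range(n_steps - 1, 0, -1):
    itm = payoff(X[:, t]) > 0          # regress on in-the-money paths only
    if itm.sum() > 10:
        coeffs = np.polyfit(X[itm, t], value[itm], deg=2)
        exercise = payoff(X[itm, t])
        stop_now = exercise > np.polyval(coeffs, X[itm, t])
        value[itm] = np.where(stop_now, exercise, value[itm])

print("estimated value at t=0: %.4f" % value.mean())
```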
• 43. Lundström, Niklas L. P.
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Mathematics, Analysis and Applied Mathematics.
On a two-phase free boundary condition for $p$-harmonic measures2009In: Manuscripta mathematica, ISSN 0025-2611, E-ISSN 1432-1785, Vol. 129, p. 231-249Article in journal (Refereed)
• 44. Lundström, Niklas L. P.
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Mathematics, Analysis and Applied Mathematics.
The boundary Harnack inequality for solutions to equations of Aronsson type in the plane2011In: Annales Academiae Scientiarum Fennicae Mathematica, ISSN 1239-629X, E-ISSN 1798-2383, Vol. 36, p. 261-278Article in journal (Refereed)
• 45.
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Mathematics, Analysis and Applied Mathematics.
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Mathematics, Analysis and Applied Mathematics. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Mathematics, Analysis and Applied Mathematics.
Systems of variational inequalities for non-local operators related to optimal switching problems: existence and uniqueness2014In: Manuscripta mathematica, ISSN 0025-2611, E-ISSN 1432-1785, Vol. 145, no 3-4, p. 407-432Article in journal (Refereed)
In this paper we study viscosity solutions to the system \begin{eqnarray*}&&\min\biggl\{-\mathcal{H}u_i(x,t)-\psi_i(x,t),u_i(x,t)-\max_{j\neq i}(-c_{i,j}(x,t)+u_j(x,t))\biggr\}=0,\notag\\&&u_i(x,T)=g_i(x),\ i\in\{1,\dots,d\},\end{eqnarray*}where $(x,t)\in\mathbb R^{N}\times [0,T]$. Concerning $\mathcal{H}$ we assume that $\mathcal{H}=\mathcal{L}+\mathcal{I}$ where$\mathcal{L}$ is a linear, possibly degenerate, parabolic operator of second order and $\mathcal{I}$ is a non-local integro-partial differential operator. A special case of this type of system of variational inequalities with terminal data occurs in the context of optimal switching problems when thedynamics of the underlying state variables is described by $N$-dimensional Levy processes. We establish a general comparison principle for viscosity sub- and supersolutions to the system under mild regularity, growth andstructural assumptions on the data, i.e., on the operator $\mathcal{H}$ and on continuous functions $\psi_i$, $c_{i,j}$, and$g_i$. Using the comparison principle we establish the existence of a unique viscosity solution $(u_1,\dots,u_d)$ to the system by using Perron's method. Our contribution, compared to the existing literature, is that we establish existence and uniqueness of viscosity solutions in the setting of Levy processes and non-local operators with no sign assumption on the switching costs $\{c_{i,j}\}$ and allowing $c_{i,j}$ to depend on $x$ as well as $t$.
• 46.
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Mathematics, Analysis and Applied Mathematics.
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Mathematics, Analysis and Applied Mathematics. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Mathematics, Analysis and Applied Mathematics.
Systems of variational inequalities in the context of optimal switching problems and operators of Kolmogorov type2014In: Annali di Matematica Pura ed Applicata, ISSN 0373-3114, E-ISSN 1618-1891, Vol. 193, no 4, p. 1213-1247Article in journal (Refereed)
• 47.
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Mathematics, Analysis and Applied Mathematics.
A backward in time Harnack inequality for non-negative solutions to fully non-linear parabolic equations2014In: Revista di Matematica della Universita Di Parma, ISSN 0035-6298, Vol. 5, no 1, p. 1-14Article in journal (Refereed)
We consider fully non-linear parabolic equations of the form $Hu =F(D^2u(x,t),Du(x,t),x,t)-\partial_tu = 0$ in bounded space-time domains $D\subset\mathbb R^{n+1}$, assuming only $F(0,0,x,t)=0$ and a uniform parabolicity condition on $F$. For domains of the form $\Omega_T=\Omega\times (0,T)$, where $\Omega\subset\mathbb R^n$ is a bounded Lipschitz domain and $T>0$, we establish a scale-invariant backward in time Harnack inequality for non-negative solutions vanishing on the lateral boundary. Our argument rests on the comparison principle, the Harnack inequality and local Hölder continuity estimates.
• 48.
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Mathematics, Analysis and Applied Mathematics.
An estimate of a polynomial capacity1998In: Potential Analysis, ISSN 0926-2601, E-ISSN 1572-929X, Vol. 9, no 3, p. 217-227Article in journal (Refereed)
• 49.
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Mathematics, Analysis and Applied Mathematics.
Boundary value problems and duality between $L^p$ Dirichlet and regularity problems for second order parabolic systems in non-cylindrical domains2006In: Collectanea Mathematica (Universitat de Barcelona), ISSN 0010-0757, E-ISSN 2038-4815, Vol. 57, p. 93-119Article in journal (Refereed)
• 50.
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Mathematics, Analysis and Applied Mathematics.
Boundary value problems for parabolic Lamé systems in time-varying domains1999In: Indiana University Mathematics Journal, ISSN 0022-2518, E-ISSN 1943-5258, Vol. 48, p. 1285-1355Article in journal (Refereed)
|
2019-01-21 13:30:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7090281248092651, "perplexity": 1621.5390226670688}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583792784.64/warc/CC-MAIN-20190121131658-20190121153658-00018.warc.gz"}
|
https://brilliant.org/problems/an-algebra-problem-by-hummus-a/
|
# An algebra problem by Hummus a
Algebra Level 4
$$\left\lfloor { x }^{ 2 }+\frac { 1 }{ { x }^{ 3 } } \right\rfloor =\left\lfloor { x }^{ 3 }+1 \right\rfloor$$
How many solutions does the above equation have?
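A quick numerical scan, purely exploratory and not a proof (the grid and range below are arbitrary choices), shows where the two sides agree:

```python
import math

# Exploratory grid scan of floor(x^2 + 1/x^3) == floor(x^3 + 1), x != 0.
hits = []
x = -3.0
while x <= 3.0:
    if abs(x) > 1e-9 and math.floor(x**2 + 1.0 / x**3) == math.floor(x**3 + 1.0):
        hits.append(round(x, 2))
    x += 0.01
print(len(hits), "grid points satisfy the equation, e.g.", hits[:5])
```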
|
2017-01-24 11:26:37
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8227568864822388, "perplexity": 9230.692414549983}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284405.58/warc/CC-MAIN-20170116095124-00388-ip-10-171-10-70.ec2.internal.warc.gz"}
|
https://www.investopedia.com/terms/a/atr.asp
|
### What is Average True Range - ATR?
The average true range (ATR) is a technical analysis indicator that measures market volatility by decomposing the entire range of an asset price for that period. Specifically, ATR is a measure of volatility introduced by market technician J. Welles Wilder Jr. in his book, "New Concepts in Technical Trading Systems."
The true range indicator is taken as the greatest of the following: current high less the current low; the absolute value of the current high less the previous close; and the absolute value of the current low less the previous close. The average true range is then a moving average, generally using 14 days, of the true ranges.
• Average true range (ATR) is a technical indicator measuring market volatility.
• It is typically derived from the 14-day moving average of a series of true range indicators.
• It was originally developed for use in commodities markets but has since been applied to all types of securities.
### The Formula For ATR Is
The first step in calculating ATR is to find a series of true range values for a security. The price range of an asset for a given trading day is simply its high minus its low. Meanwhile, the true range is more encompassing and is defined as:
\begin{aligned} &TR = \text{Max}[(H\ -\ L), \text{Abs}(H\ -\ C_P),\text{Abs}(L\ -\ C_P)]\\ &ATR=\frac{1}{n}\sum_{i=1}^{n}TR_i\\ &\textbf{where:}\\ &TR_i=\text{A particular true range}\\ &n=\text{The time period employed} \end{aligned}
### How To Calculate ATR
Traders can use shorter periods than 14 days to generate more trading signals, while longer periods have a higher probability of generating fewer trading signals. For example, assume a short-term trader only wishes to analyze the volatility of a stock over a period of five trading days. Therefore, the trader could calculate the five-day ATR. Assuming the historical price data is arranged in reverse chronological order, the trader finds the maximum of the current high minus the current low, the absolute value of the current high minus the previous close, and the absolute value of the current low minus the previous close. These calculations of the true range are done for the five most recent trading days and are then averaged to calculate the first value of the five-day ATR.
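To make the five-day calculation above concrete, here is a minimal Python sketch; the OHLC values are invented purely for illustration and are not from any real security.

```python
# Minimal sketch of the true range / five-day ATR computation described
# above. The price data is invented purely for illustration.

highs  = [48.70, 48.72, 48.90, 48.87, 48.82]
lows   = [47.79, 48.14, 48.39, 48.37, 48.24]
closes = [48.16, 48.61, 48.75, 48.63, 48.74]

def true_range(high, low, prev_close):
    """Greatest of: high - low, |high - prev close|, |low - prev close|."""
    return max(high - low, abs(high - prev_close), abs(low - prev_close))

# The first day has no previous close, so its TR is simply high - low.
trs = [highs[0] - lows[0]]
for i in range(1, len(highs)):
    trs.append(true_range(highs[i], lows[i], closes[i - 1]))

atr = sum(trs) / len(trs)  # first value of the five-day ATR
print("true ranges:", [round(tr, 2) for tr in trs])
print("five-day ATR: %.2f" % atr)
```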
### What Does Average True Range Tell You?
Wilder originally developed the average true range (ATR) for commodities, but the indicator can also be used for stocks and indices. Simply put, a stock experiencing a high level of volatility has a higher ATR, and a low volatility stock has a lower ATR. The ATR may be used by market technicians to enter and exit trades, and it is a useful tool to add to a trading system. It was created to allow traders to more accurately measure the daily volatility of an asset by using simple calculations. The indicator does not indicate the price direction; rather it is used primarily to measure volatility caused by gaps and limit up or down moves. The ATR is fairly simple to calculate and only needs historical price data.
The ATR is commonly used as an exit method that can be applied no matter how the entry decision is made. One popular technique is known as the "chandelier exit" and was developed by Chuck LeBeau. The chandelier exit places a trailing stop under the highest high the stock has reached since you entered the trade. The distance between the highest high and the stop level is defined as some multiple of the ATR. For example, we can subtract three times the value of the ATR from the highest high since we entered the trade.
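As a small illustration of the chandelier exit just described (the three-times-ATR multiple and the sample numbers are assumptions for illustration, not a recommendation):

```python
# Sketch of the chandelier exit: a trailing stop placed a multiple of the
# ATR below the highest high reached since entering the trade.

def chandelier_stop(highest_high_since_entry, atr, multiplier=3.0):
    return highest_high_since_entry - multiplier * atr

print(round(chandelier_stop(highest_high_since_entry=52.30, atr=1.35), 2))  # 48.25
```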
Average true range can also give a trader an indication of what size trade to put on in derivatives markets. It is possible to use the ATR approach to position sizing that accounts for an individual trader's own willingness to accept risk as well as the volatility of the underlying market. (For a detailed example on how to use ATR for this purpose, read our article, Sizing A Futures Trade Using Average True Range.)
### Example Of How To Use ATR
As a hypothetical example, assume the first value of the five-day ATR is calculated at 1.41 and the sixth day has a true range of 1.09. The sequential ATR value could be estimated by multiplying the previous value of the ATR by the number of days less one, and then adding the true range for the current period to the product. Next, divide the sum by the selected timeframe. For example, the second value of the ATR is estimated to be 1.35, or (1.41 * (5 - 1) + (1.09)) / 5. The formula could then be repeated over the entire time period.
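That arithmetic is easy to check in code; a one-line sketch of the update rule:

```python
# ATR update from the example: ATR_t = (ATR_{t-1} * (n - 1) + TR_t) / n
n, prev_atr, todays_tr = 5, 1.41, 1.09
print(round((prev_atr * (n - 1) + todays_tr) / n, 2))  # 1.35
```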
### Limitations Of ATR
There are two main limitations to using the average true range indicator. The first is that ATR is a subjective measure, meaning that it is open to interpretation. There is no single ATR value that will tell you with any certainty that a trend is about to reverse. Instead, ATR readings should always be compared against earlier readings to get a feel for a trend's strength or weakness.
Second, ATR only measures volatility and not the direction of an asset's price. This can sometimes result in mixed signals, particularly when markets are experiencing pivots or when trends are at turning points. For instance, a sudden increase in the ATR following a large move counter to the prevailing trend may lead some traders to think the ATR is confirming the old trend; however, this may not actually be the case.
|
2019-07-17 03:03:42
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 1, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5630701780319214, "perplexity": 1079.8158934249946}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525009.36/warc/CC-MAIN-20190717021428-20190717043428-00321.warc.gz"}
|
https://www.physicsforums.com/threads/postage-stamp-sells-for-9-5-million.758515/
|
# News Postage stamp sells for $9.5 million

1. Jun 18, 2014

### jtbell

### Staff: Mentor

Yesterday, the British Guiana one-cent magenta postage stamp of 1856 sold at auction for $9.5 million:
http://money.cnn.com/2014/06/18/luxury/stamp-auction/index.html?hpt=hp_t2
http://en.wikipedia.org/wiki/British_Guiana_1c_magenta
It thereby reclaimed its position as the world’s most valuable stamp from the Sweden three-skilling yellow of 1855:
http://en.wikipedia.org/wiki/Treskilling_Yellow
Its most recent previous owner was John E. du Pont, who bought it in 1980 for $935,000. He was convicted of murder in 1997, and died in prison in 2010. His estate was the seller in yesterday's sale. I've seen it on probably two occasions: the big international stamp shows in Philadelphia in 1976, and in Chicago in 1986. During the shows in San Francisco in 1997 and Washington DC in 2006, du Pont was in prison; maybe the new owner will show it in New York in 2016. In Philadelphia, it was displayed by itself in a "court of honor" along with several other famous philatelic items; but in Chicago it was among the "normal" exhibits as part of du Pont's comprehensive collection of British Guiana, with a guard standing nearby. Thanks to Photoshop, I can pull out some details from the apparently featureless blob in the scanned slide above.

[Attached images: bg1.jpg, bg2.jpg, bg3.jpg]

2. Jun 18, 2014

### adjacent

:surprised What's so special about it? Why would someone buy an old stamp for 9.5 million?

3. Jun 18, 2014

### Greg Bernhardt

### Staff: Admin

You have got to be kidding me. Paying $9.5M for that is a crime against humanity.
4. Jun 18, 2014
### D H
Staff Emeritus
The only thing that is special about it is that there's only one of them. It is unique.
One of the driving factors in the value of a stamp is how many others just like it are out there in stamp collections. The value of a stamp escalates exponentially with rarity. The greatest rarity is one and only one. For this stamp, along with the Treskilling Yellow mentioned by JT, there is only one.
5. Jun 18, 2014
### Staff: Mentor
The "collecting disease" does strange things to people. :uhh: I have a mild case of it myself, so I can see where du Pont and Philipp von Ferrary and Bill Gross and others like them come from. (If any of you all invest your savings in one of the PIMCO bond funds, you're helping Bill with his stamp collection.)
There aren't many truly unique stamps. I think the British Guiana stamp takes the prize because it's from a British colony, it started to issue stamps early on, and the British were among the first to take up stamp collecting seriously in the 1840s and 1850s. Think of those eccentric rich Britons of the Victorian era.
British Guiana is small, and most of its early stamps are scarce to begin with. They're among the "blue chips" of philately, so to speak. A collection of British Guiana is manageable in size (number of stamps), but nevertheless expensive to do well, even without the one-cent magenta. With the one-cent magenta, only one person at a time can have a complete collection. It doesn't take many millionaires who collect British Guiana to drive the price up.
|
2017-10-22 20:24:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18385006487369537, "perplexity": 3938.4788873532607}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187825436.78/warc/CC-MAIN-20171022184824-20171022204824-00638.warc.gz"}
|
http://math.stackexchange.com/questions/287574/proving-continuity-using-the-topological-definition
|
# Proving continuity using the topological definition?
Let $f$ be a function from $\mathbb{R}$ to $\mathbb{R}$, and suppose that $f(x) = 0$, $\forall x \in \mathbb{R}$, except when $x=c$, for a fixed $c \in \mathbb{R}$. Now, $f$ is clearly discontinuous at $c$, and I can easily show this using the $\delta$-$\epsilon$ definition, but how do I show discontinuity using the definition of continuity that the preimage of every open set is open? I just don't see any nontrivial open sets in the image of the function, as $f(\mathbb{R}) = \{0,f(c)\}$.
Any ideas?
EDIT: Note that $f(c) \neq 0$.
You know that $f(c)\neq 0$. So take some open neighborhood $U$ such that $f(c)\in U$ but $0\notin U$. So $f^{-1}(U)=\{c\}$. However, singletons are not open in the usual topology.
To see this, note that $c\in\{c\}$. But $\{c\}$ does not contain any open interval about $c$, hence it cannot possibly be open, since any open set contains an open neighborhood of any of its points.
You can think of it as a subset of the image endowed with the subset topology derived from $\mathbb{R}$. – Alfonso Fernandez Jan 26 '13 at 21:02
As I understand it, we're taking $$f(x)=\begin{cases}0 &: x\in\Bbb R\setminus\{c\}\\ d &:\ x=c\end{cases}.$$Assuming $d\neq0$, then there is an open interval around $d$, say $U$, so that $U\cap\{0\}=\varnothing$. This implies that $\{d\}$ is open in the image of $f$, but $f^{-1}(d)=\{c\}$ is not open in $\Bbb R$, the domain.
|
2015-05-25 03:46:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.989258885383606, "perplexity": 66.88349960466661}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207928350.51/warc/CC-MAIN-20150521113208-00303-ip-10-180-206-219.ec2.internal.warc.gz"}
|
https://conference.ippp.dur.ac.uk/event/470/contributions/2562/
|
The 34th International Symposium on Lattice Field Theory (Lattice 2016)
24-30 July 2016
Highfield Campus, University of Southampton
Europe/London timezone
Lattice QCD calculation of form factors for $\Lambda_b \to \Lambda(1520) \ell^+ \ell^-$ decays
29 Jul 2016, 16:50
20m
Building 67 Room 1003 (Highfield Campus, University of Southampton)
Building 67 Room 1003
Highfield Campus, University of Southampton
Highfield Campus, Southampton SO17 1BJ, UK
Talk Weak Decays and Matrix Elements
Speaker
Prof. Stefan Meinel (University of Arizona / RIKEN BNL Research Center)
Description
Experimental results for mesonic $b \to s \mu^+ \mu^-$ decays show a pattern of deviations from Standard-Model predictions. These deviations could be due to new fundamental physics or due to an insufficient understanding of hadronic effects. Complementary information on the $b \to s \mu^+ \mu^-$ transition can be obtained from $\Lambda_b$ decays. This was recently done using the process $\Lambda_b \to \Lambda \mu^+ \mu^-$, where the $\Lambda$ is the lightest strange baryon. A further interesting channel that is being analyzed by LHCb is $\Lambda_b \to p^+ K^- \mu^+ \mu^-$, where the $p^+ K^-$ final state receives contributions from multiple higher-mass $\Lambda$ resonances. The narrowest and most prominent of these is the $\Lambda(1520)$, which has $J^P=\frac32^-$. We discuss our progress toward a lattice QCD calculation of the relevant $\Lambda_b \to \Lambda(1520)$ form factors.
Primary authors
Gumaro Rendon (University of Arizona) Prof. Stefan Meinel (University of Arizona / RIKEN BNL Research Center)
Presentation Materials
Slides
|
2019-09-16 12:12:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5065555572509766, "perplexity": 5073.941401040731}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514572556.54/warc/CC-MAIN-20190916120037-20190916142037-00142.warc.gz"}
|
https://www.biogeosciences.net/15/4405/2018/
|
Biogeosciences – An interactive open-access journal of the European Geosciences Union
Biogeosciences, 15, 4405-4429, 2018
https://doi.org/10.5194/bg-15-4405-2018
Research article | 19 Jul 2018
# Modeling seasonal and vertical habitats of planktonic foraminifera on a global scale
Seasonal and vertical distribution of planktonic foraminifera
Kerstin Kretschmer, Lukas Jonkers, Michal Kucera, and Michael Schulz
• MARUM – Center for Marine Environmental Sciences and Faculty of Geosciences, University of Bremen, Bremen, Germany
Abstract
Species of planktonic foraminifera exhibit specific seasonal production patterns and different preferred vertical habitats. The seasonal and vertical habitats are not constant throughout the range of the species and changes therein must be considered when interpreting paleoceanographic reconstructions based on fossil foraminifera. However, detecting the effect of changing vertical and seasonal habitat on foraminifera proxies requires independent evidence for either habitat or climate change. In practice, this renders accounting for habitat tracking from fossil evidence almost impossible. An alternative method that could reduce the bias in paleoceanographic reconstructions is to predict species-specific habitat shifts under climate change using an ecosystem modeling approach. To this end, we present a new version of a planktonic foraminifera model, PLAFOM2.0, embedded into the ocean component of the Community Earth System Model version 1.2.2. This model predicts monthly global concentrations of the planktonic foraminiferal species Neogloboquadrina pachyderma, N. incompta, Globigerina bulloides, Globigerinoides ruber (white), and Trilobatus sacculifer throughout the world ocean, resolved in 24 vertical layers to 250 m of depth. The resolution along the vertical dimension has been implemented by applying the previously used spatial parameterization of carbon biomass as a function of temperature, light, nutrition, and competition on depth-resolved parameter fields. This approach alone results in the emergence of species-specific vertical habitats, which are spatially and temporally variable. Although an explicit parameterization of the vertical dimension has not been carried out, the seasonal and vertical distribution patterns predicted by the model are in good agreement with sediment trap data and plankton tow observations. In the simulation, the colder-water species N. pachyderma, N. incompta, and G. bulloides show a pronounced seasonal cycle in their depth habitat in the polar and subpolar regions, which appears to be controlled by food availability. During the warm season, these species preferably occur in the subsurface (below 50 m of water depth), while towards the cold season they ascend through the water column and are found closer to the sea surface. The warm-water species G. ruber (white) and T. sacculifer exhibit a less variable shallow depth habitat with highest carbon biomass concentrations within the top 40 m of the water column. Nevertheless, even these species show vertical habitat variability and their seasonal occurrence outside the tropics is limited to the warm surface layer that develops at the end of the warm season. The emergence in PLAFOM2.0 of species-specific vertical habitats, which are consistent with observations, indicates that the population dynamics of planktonic foraminifera species may be driven by the same factors in time, space, and with depth, in which case the model can provide a reliable and robust tool to aid the interpretation of proxy records.
1 Introduction
Planktonic foraminifera are found throughout the open ocean, where they inhabit roughly the top 500 m of the water column. Their calcareous shells, preserved in ocean sediments, are widely used to reconstruct past climate conditions. To do so, information about their habitat, including their horizontal and vertical distribution, is needed. It is known from observational data that the prevailing environmental conditions, such as temperature, stratification, light intensity, and food availability, affect the growth and distribution of the individual planktonic foraminifera. Based on stratified plankton tow and sediment trap data, the seasonal succession of planktonic foraminifera species has been assessed on a local or regional scale (e.g., Fairbanks and Wiebe, 1980; Kohfeld et al., 1996; Wilke et al., 2009; Jonkers et al., 2013; Jonkers and Kučera, 2015), whereas for a broader regional or global perspective, modeling approaches have been used to study the seasonal variations in the surface (mixed) layer of the ocean. Comparatively less is known about the depth habitat of planktonic foraminifera species and how it varies seasonally. Although previous studies identified different environmental and ontogenetic factors (i.a., temperature, chlorophyll a concentration, the lunar cycle, and/or the structure of the water column) that influence the species-specific depth habitats, including their mean living depth and vertical migration (e.g., Fairbanks and Wiebe, 1980; Fairbanks et al., 1982; Schiebel et al., 2001; Simstich et al., 2003; Field, 2004; Salmon et al., 2015; Rebotim et al., 2017), only one previous attempt has been made to model the vertical habitat.
It is well known that species-specific habitats vary seasonally and spatially depending on the prevailing climatic conditions. Yet, despite this evidence for a variable habitat, it is often assumed in paleoceanographic studies that the habitat of planktonic foraminifera is constant, i.e., that it does not change in time and space, potentially leading to erroneous estimates of past climate conditions. A recent study highlighted how foraminifera proxies are affected by habitat tracking and showed that by not accounting for this behavior, spatial and temporal trends in proxy records may be underestimated. Given the habitat variability of planktonic foraminifera, it is more than likely that a climate-dependent offset from mean annual sea surface conditions results not only from seasonal but also from depth habitat variability due to changes in ambient conditions. Such vertical habitat variability was shown by an investigation of the parameters controlling the depth habitat of planktonic foraminifera in the subtropical eastern North Atlantic. In line with studies from other regions of the world ocean (e.g., Fairbanks et al., 1982; Bijma et al., 1990a; Ortiz et al., 1995; Schiebel et al., 2001; Field, 2004; Salmon et al., 2015), that study identified distinct species-specific depth habitats, but it also showed that the habitats vary on lunar and seasonal timescales and in response to temperature, chlorophyll a, and other environmental factors. Evidence for variable depth habitats, at least on a regional scale, has emerged from studies in other regions.
These observations underline the necessity to consider species-specific habitats and their variability on a global scale to increase the reliability of paleoceanographic reconstructions. However, a global assessment of species-specific depth habitat variability in time and space and of the potential underlying control mechanisms is lacking. Since the observational data coverage of the global ocean is too sparse to provide such a broad estimate, we apply an ecosystem modeling approach to predict the vertical and seasonal distribution of planktonic foraminifera on a global scale.
2 Methods
## 2.1 Approach
To predict the seasonally varying global species-specific depth habitat of planktonic foraminifera, we modified the previously developed planktonic foraminifera model PLAFOM, which is implemented as an off-line module into the ocean component of the Community Earth System Model version 1.2.2 (CESM1.2; Hurrell et al., 2013), with active ocean biogeochemistry (denoted as the CESM1.2(BGC) configuration). This model system simulates the monthly concentrations of five modern planktonic foraminiferal species, which are widely used in paleoceanographic reconstructions. The original approach aimed to predict the distribution of planktonic foraminifera in the surface mixed layer on geological timescales. This model version has been successfully used to assess the effect of changing environmental conditions on species distributional patterns in time and space and to aid in interpreting paleoceanographic records regarding seasonal production shifts in the geological past, but could not provide any information about depth. To implement the vertical dimension, we used an approach in which we first updated PLAFOM (hereafter referred to as PLAFOM2.0) by including light dependency for symbiont-bearing planktonic foraminifera and then applied the previously used spatial parameterization of carbon biomass as a function of temperature, nutrition, and competition, together with light, on depth-resolved parameter fields. By combining PLAFOM2.0 with the CESM1.2(BGC) configuration (hereafter referred to as the CESM1.2(BGC+PLA) configuration), the vertical dimension can be resolved throughout the ocean, with 24 layers in the top 250 m. Thus, PLAFOM2.0, belonging to a suite of proxy system models (e.g., Pollard and Schulz, 1994; Schmidt, 1999; Fraile et al., 2008; Evans et al., 2013; Dee et al., 2015; Völpel et al., 2017), will aid the interpretation of paleoclimate reconstructions. In addition, PLAFOM2.0 has the potential to be used in a paleoclimate data assimilation framework (see, e.g., Goosse et al., 2010; Steiger et al., 2014; Dee et al., 2016; Hakim et al., 2016).
## 2.2 CESM1.2(BGC) configuration
We used the CESM1.2(BGC) configuration as a code base. This configuration includes the Biogeochemical Elemental Cycling (BEC) model, which is based on an upper ocean ecosystem model coupled to a biogeochemistry model based on the Ocean Carbon Model Intercomparison Project (OCMIP; Doney et al., 2006). The BEC model includes various potentially growth-limiting nutrients (nitrate, ammonium, phosphate, dissolved iron, and silicate), three explicit phytoplankton functional types (diatoms, diazotrophs, pico- and nano-phytoplankton), a partial calcifier class (representing coccolithophores), a single adaptive zooplankton class, dissolved organic matter, sinking particulate detritus, and full carbonate system thermodynamics. Phytoplankton growth rates are controlled by temperature, light, and available nutrients. The single zooplankton pool grazes on all phytoplankton types, whereby the routing of grazed material varies depending on the type of prey. For further details, we refer to the original BEC model description.
The BEC model has been embedded into the ocean component of CESM version 1.2.2. CESM1.2 is a fully coupled climate model consisting of several components, including the atmosphere, ocean, land, and sea ice, whereby the geophysical fluxes among the components are exchanged through a central coupler. Here we performed an ocean-ice-only simulation with active ocean biogeochemistry, whereby the ocean model is coupled to both the sea ice model and data models for the atmosphere, land, and river routing, which provide the required input data for the simulation.
The CESM1.2 ocean component is the Parallel Ocean Program version 2 (POP2; Smith et al., 2010; Danabasoglu et al., 2012), with a zonal resolution of 1° and an increased meridional resolution of 0.27° near the Equator. POP2 employs a nonuniform dipolar grid with the North Pole being displaced into Greenland. With a total number of 60 vertical levels, the grid spacing is fine near the surface (10 levels in the top 100 m) and increases with depth to 250 m near the bottom. The sea ice component of CESM1.2 is the Community Ice Code version 4 (CICE4), which uses the same horizontal grid as the ocean model.
## 2.3 PLAFOM2.0
This new model version, PLAFOM2.0, considers the polar species Neogloboquadrina pachyderma, which is supplemented by the subpolar species N. incompta (sensu Darling et al., 2006) and Globigerina bulloides as well as by the warm-water algal symbiont-bearing species Globigerinoides ruber (white) and Trilobatus sacculifer (sensu Spezzaferri et al., 2015). Those species have been chosen as they can be considered to represent a large portion of the planktonic foraminiferal biomass in the surface ocean (for further details see Kretschmer et al., 2016). The different planktonic foraminifera species were added to the ocean component of CESM1.2 as optional passive tracers with the requirement that the BEC model is active.
PLAFOM2.0 is driven by temperature, the available food sources (including zooplankton, diatoms, small phytoplankton, and organic detritus), and also light availability, whereby the latter only matters with regard to the growth of the two algal symbiont-bearing species and G. bulloides, which according to the latest findings hosts the picocyanobacterium Synechococcus as a photosynthesizing endobiont. Synechococcus is known to be important for cyanobacterial photosynthesis in marine and freshwater ecosystems.
Table 1. Model parameters and their modifications relative to earlier model versions. The original value is given in parentheses.
PF, 0 – maximum foraminiferal growth rate (in day−1) at 30 °C (derived from the published maximum zooplankton growth rate at 20 °C). αPI – initial slope of the photosynthesis–irradiance (PI) curve (in m2 W−1 day−1) (derived from published PI curves of Synechococcus for G. bulloides and of endosymbiotic dinoflagellates for T. sacculifer). p% – fraction of photosynthesis contributing to foraminiferal growth rate. Tthres – minimum (for N. pachyderma) or maximum (for all other species) threshold temperature at which foraminiferal species can thrive (in °C). clij – competition pressure of species i upon species j.
The food preferences and temperature tolerance limits for each species have been derived from sediment trap data and culturing experiments (see Fraile et al., 2008, for details). Changes in the foraminifera carbon concentration for each species are determined as follows:
$$\frac{\mathrm{d}F}{\mathrm{d}t} = (\mathrm{GGE} \cdot \mathrm{TG}) - \mathrm{ML}, \tag{1}$$
where F is the foraminifera carbon concentration (in mmol C m−3), GGE (gross growth efficiency) is the portion of grazed matter that is incorporated into foraminiferal carbon biomass, TG represents total grazing (i.e., the growth rate in mmol C m−3 s−1), and ML denotes mass loss (i.e., the mortality rate in mmol C m−3 s−1). To properly simulate the vertical distribution of each considered planktonic foraminifera, we included light dependency and modified parts of the parameterizations of the foraminiferal species concentration. We therefore extended the growth rate equation by considering not only food availability and temperature sensitivity, but also light intensity to define growth. Additionally, we adjusted parts of the mortality rate equation to improve the model accuracy. In the following, the modifications are described in detail with regard to growth and mortality rates. The modifications compared to the earlier model version are summarized in Table 1.
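For readers who wish to experiment with the tendency equation itself, a minimal numerical sketch of Eq. (1) is given below. It uses a simple forward-Euler step in Python rather than the Fortran of the actual POP2 module; the function name and all parameter values are illustrative placeholders and are not taken from PLAFOM2.0.

```python
# Minimal forward-Euler step for Eq. (1): dF/dt = GGE * TG - ML.
# All names and values are illustrative placeholders, not PLAFOM2.0 code.

def step_foram_biomass(F, gge, total_grazing, mass_loss, dt):
    """Advance the foraminifera carbon concentration F (mmol C m-3)
    over one time step dt (s)."""
    dFdt = gge * total_grazing - mass_loss  # mmol C m-3 s-1
    return F + dt * dFdt

F = 1.0e-4  # initial biomass (mmol C m-3)
for _ in range(24):  # one day of hourly steps
    F = step_foram_biomass(F, gge=0.3, total_grazing=2.0e-9,
                           mass_loss=4.0e-10, dt=3600.0)
print(F)
```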
### 2.3.1 Growth rate
The growth rate depends on the available food and the temperature sensitivity of each foraminiferal species as well as on light for the species with algal symbionts and/or cyanobacterial endobionts. To account for the light dependence with depth influencing the growth of G. bulloides and of the spinose species G. ruber (white) and T. sacculifer, we included a photosynthetic growth rate. As a first-order estimate, we applied an approach similar to established phytoplankton formulations, in which growth rates are determined based on available light and nutrient conditions and which have accordingly been used in the BEC model. We are aware that a phytoplankton response to light is not directly transferable to planktonic foraminifera, but we argue that as a first approximation this is a valid approach.
Photosynthesis depends on light availability and temperature. This codependency can be expressed as follows:
$$P_{\mathrm{F,photo}} = P_{\mathrm{F,max}} \cdot \left[1 - \exp\left(\frac{-\alpha_{\mathrm{PI}} \cdot I_{\mathrm{PAR}}}{P_{\mathrm{F,max}}}\right)\right],$$
where PF, photo is the foraminifera-specific rate of photosynthesis (in s−1), and PF, max is the maximum value of PF, photo at temperature T (in s−1), calculated as
$$P_{\mathrm{F,max}} = P_{\mathrm{F,0}} \cdot T_{\mathrm{func}};$$
αPI is the initial slope of the photosynthesis–irradiance curve (in m2 W−1 s−1) (Table 1), IPAR is the average irradiance over the mixed layer depth provided by the ecosystem model (in W m−2), PF, 0 represents the maximum foraminiferal growth rate at a specific temperature T0 (in s−1) (Table 1), and Tfunc is the temperature response function (dimensionless). The temperature function is defined as
$$T_{\mathrm{func}} = q_{10}^{\frac{T - T_{0}}{10}}, \tag{2}$$
with a q10 value of 1.5, T being the ambient ocean temperature (in K), and T0 the reference temperature of 303.15 K.
The photosynthetic growth rate, PF (in mmol C m−3 s−1), can finally be determined as follows:
$$P_{\mathrm{F}} = P_{\mathrm{F,photo}} \cdot F \cdot p_{\%},$$
where p% represents the fraction of photosynthesis contributing to growth (see Table 1).
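Taken together, the photosynthesis–irradiance relation, the temperature response of Eq. (2), and the scaling by biomass and p% can be combined into a single routine. The following Python sketch illustrates this chain of equations; the parameter values in the example call are placeholders, not the Table 1 values.

```python
import numpy as np

Q10 = 1.5     # temperature sensitivity (dimensionless), as in Eq. (2)
T0 = 303.15   # reference temperature (K)

def t_func(T):
    """Temperature response function, Eq. (2)."""
    return Q10 ** ((T - T0) / 10.0)

def photosynthetic_growth(F, T, I_par, P_F0, alpha_PI, p_frac):
    """Photosynthetic growth rate P_F (mmol C m-3 s-1) following
    the P-I formulation described above.

    F        : foraminifera carbon concentration (mmol C m-3)
    T        : ambient temperature (K)
    I_par    : mixed-layer averaged irradiance (W m-2)
    P_F0     : maximum growth rate at T0 (s-1)
    alpha_PI : initial slope of the P-I curve (m2 W-1 s-1)
    p_frac   : fraction of photosynthesis contributing to growth
    """
    P_max = P_F0 * t_func(T)
    P_photo = P_max * (1.0 - np.exp(-alpha_PI * I_par / P_max))
    return P_photo * F * p_frac

# Illustrative call; all numbers are placeholders.
print(photosynthetic_growth(F=1e-4, T=298.15, I_par=50.0,
                            P_F0=3e-5, alpha_PI=1e-6, p_frac=0.1))
```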
### 2.3.2 Mortality rate
The mortality rate is determined by respiration loss, predation by higher trophic levels, and competition among species. To improve the seasonal patterns in the foraminiferal carbon biomass at low temperatures, we followed earlier work and adjusted the temperature dependence of the predation term (MLpred in mmol C m−3 s−1):
$$\mathrm{ML}_{\mathrm{pred}} = f_{\mathrm{mort2}} \cdot T_{\mathrm{func}} \cdot F_{\mathrm{p}}^{2},$$
where fmort2 represents the quadratic mortality rate (in s−1 (mmol C m−3)−1), Tfunc is the temperature response function (dimensionless) used for scaling, and Fp (in mmol C m−3) is used to limit the planktonic foraminifera mortality at very low carbon biomass levels. Compared to the previous model version, predation is here scaled by Eq. (2), a temperature function using a q10 value of 1.5.
Additionally, we included a stronger competitive behavior of G. bulloides by adjusting the free parameters in the competition term. In PLAFOM2.0, competition (MLcomp in mmol C m−3 s−1) is defined as follows:
$$\mathrm{ML}_{\mathrm{comp}} = \sum_{i}\left[F_{\mathrm{p}} \cdot \frac{cl_{ij} \cdot F_{i} \cdot d}{F_{i} \cdot d + 0.1}\right],$$
with Fi being the concentration of the foraminiferal species exerting competition, clij the maximum competition pressure of species i upon species j, and d the constant controlling the steepness of the Michaelis–Menten relationship for competition. In comparison with the previous model version, we only modified the parameter clij for N. incompta, G. bulloides, and G. ruber (white) (Table 1).
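A compact sketch of the two modified loss terms, again in Python with purely illustrative values for fmort2, clij, and d, could look as follows.

```python
import numpy as np

Q10, T0 = 1.5, 303.15  # as in Eq. (2)

def predation_loss(F_p, f_mort2, T):
    """Quadratic predation term ML_pred = f_mort2 * T_func * F_p**2,
    with T_func the temperature scaling of Eq. (2)."""
    t_func = Q10 ** ((T - T0) / 10.0)
    return f_mort2 * t_func * F_p ** 2

def competition_loss(F_p, F_others, cl, d):
    """Competition term ML_comp summed over all competing species i,
    with a Michaelis-Menten-shaped dependence on their biomass F_others[i]
    and maximum competition pressure cl[i]."""
    F_others = np.asarray(F_others, dtype=float)
    cl = np.asarray(cl, dtype=float)
    return float(np.sum(F_p * (cl * F_others * d) / (F_others * d + 0.1)))

# Illustrative values only; f_mort2, cl, and d are placeholders.
print(predation_loss(F_p=1e-4, f_mort2=0.05, T=278.15))
print(competition_loss(F_p=1e-4, F_others=[2e-4, 5e-5],
                       cl=[0.2, 0.1], d=1e3))
```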
We added the present implementation of PLAFOM2.0 to the code trunk of POP2 as a separate module. Additionally, the food sources for the planktonic foraminifera species are computed in the ecosystem model and instantly passed to PLAFOM2.0 to calculate the foraminifera carbon concentration. A parameter sensitivity assessment was previously carried out for PLAFOM, and since PLAFOM2.0 is based on the same underlying formulation, we consider an extensive new sensitivity assessment not essential at this stage. For a more detailed description of the planktonic foraminifera model and its behavior on a regional or global scale in the surface mixed layer, we refer to the earlier model descriptions.
## 2.4 Model simulation
To test the model, we performed a preindustrial control experiment. We derived the initial ocean and sea ice states from an ocean-ice-only simulation that did not include the BEC ocean biogeochemistry. This model integration was spun up from rest for 300 years to approach a quasi-steady state by using a climatological forcing (based on atmospheric observations and reanalysis data) as repeated normal year forcing. Heat, freshwater, and momentum fluxes at the sea surface are based on published atmospheric data sets and implemented following the CORE-II (Coordinated Ocean-ice Reference Experiment) protocol.
The oceanic and sea ice tracer fields (such as potential temperature, salinity, and ice area) resulting from the end of this 300-year-long spin-up run were used to initialize the CESM1.2(BGC+PLA) preindustrial control simulation. The biogeochemical tracer fields (such as nutrients) were, i.a., initialized from climatologies. For instance, initial nutrient (phosphate, nitrate, silicate) distributions were taken from the World Ocean Atlas 2009 (WOA09; Garcia et al., 2010), initial values for dissolved inorganic carbon and alkalinity are from the Global Ocean Data Analysis Project (GLODAP; Key et al., 2004), and zooplankton, phytoplankton pools, and dissolved organic matter have been initialized uniformly at low values. Additionally, each planktonic foraminiferal species was also initialized uniformly at low values assuming the same (vertical) distribution as the zooplankton component of the BEC model. Furthermore, the atmospheric deposition of iron and dust is based on a published climatology.
The CESM1.2(BGC+PLA) preindustrial control simulation was integrated for 300 years to reach stable conditions in the ocean biogeochemistry in the upper 500 m of the water column (see Fig. S1 in the Supplement). Since this simulation has been forced and/or initialized based on climatologies, interannual variability and forcing trends can be excluded, and therefore we focus our analysis on the model output of only one year, here year 300.
## 2.5 Comparison to observations
To validate the model performance, we compare the simulated spatial and temporal distributions of the considered planktonic foraminiferal species with data from core tops, sediment traps, and plankton tows (Fig. 1). Based on data availability, we focus our analysis on distinct regions distributed over the world ocean covering all climate zones from the poles to the tropics.
Figure 1. Locations of (a) the core-top samples with planktonic foraminifera counts and (b) the plankton tow (orange circles) and sediment trap (orange triangles) samples used for the model validation. The map in Fig. 1a shows a combination of three published data sets. The respective information on the sediment trap and plankton tow data shown in Fig. 1b is given in Tables S1 and S2 in the Supplement.
### 2.5.1 Core-top data
To examine the spatial pattern of the five considered planktonic foraminiferal species, we compared the model predictions with fossil data using a total of 2896 core-top samples distributed over all oceans (Fig. 1a). We combined the Brown University Foraminiferal Database with the data assembled by the MARGO project and additional published data sets. For the comparison, we recalculated the relative abundances of the faunal assemblages by only considering those five species used in PLAFOM2.0. Similarity between the simulated and observed abundances was quantified using the Bray–Curtis index of similarity (bjk in %) between the relative abundances of the core-top data and the modeled data at the respective sample locations.
$$b_{jk} = \left(1 - \frac{1}{2} \cdot \sum_{i=1}^{5} \left|x_{ji} - x_{ki}\right|\right) \cdot 100\,\%$$
Here xji and xki are the modeled and observed relative abundances (with values between 0 and 1) of each species i at the given core-top locations, respectively. Note that for the calculation of the modeled relative abundances, we accounted for the different sizes of each individual species by multiplying the modeled annual mean concentration of each species with an estimate of their relative sizes (Table 2).
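As a worked example, the size weighting and the Bray–Curtis similarity described above can be reproduced with a few lines of Python; the assemblage numbers below are hypothetical and serve only to illustrate the calculation.

```python
import numpy as np

def relative_abundance(conc, rel_size):
    """Size-weighted relative abundances from modeled annual mean
    concentrations (Table 2 supplies the relative sizes)."""
    w = np.asarray(conc, dtype=float) * np.asarray(rel_size, dtype=float)
    return w / w.sum()

def bray_curtis_similarity(x_model, x_obs):
    """Bray-Curtis similarity b_jk (in %) between modeled and observed
    relative abundances of the five species (each vector sums to 1)."""
    x_model = np.asarray(x_model, dtype=float)
    x_obs = np.asarray(x_obs, dtype=float)
    return (1.0 - 0.5 * np.abs(x_model - x_obs).sum()) * 100.0

# Hypothetical assemblage (order: N. pachyderma, N. incompta,
# G. bulloides, G. ruber (white), T. sacculifer); numbers are made up.
x_mod = relative_abundance(conc=[3.0, 1.0, 1.5, 0.2, 0.1],
                           rel_size=[1.0, 1.0, 1.2, 1.1, 1.4])
x_obs = [0.55, 0.20, 0.18, 0.04, 0.03]
print(bray_curtis_similarity(x_mod, x_obs))
```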
### 2.5.2 Sediment trap data
To compare modeled and observed seasonal production patterns, several sediment traps (Table S1, Fig. 1b) have been examined. These can provide foraminiferal shell fluxes collected continuously over several months or even years. However, some sediment trap records cover only a few months (i.e., less than a year), might only have recorded local short-term processes of a particular season or year, and thus cannot provide a long-term and/or climatological mean.
Here we use the same approach as in earlier work and present the observed fluxes for multiple years from every location on a log10 scale versus day of year, whereby the zero fluxes have been replaced by half of the observed minimum flux to visualize the results. In this way, we can directly compare the peak timings of the measured fluxes at each location with the model, whereby we assume that the flux through the water column (in no. m−2 day−1) is proportional to the volume-integrated model concentrations (in mmol C m−3).
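The zero-flux replacement and log10 transformation can be expressed in a short helper function; the flux values in the example are invented for illustration.

```python
import numpy as np

def fluxes_for_log_plot(fluxes):
    """Replace zero fluxes by half of the smallest nonzero flux, then
    return log10 values, mirroring the plotting convention above."""
    fluxes = np.asarray(fluxes, dtype=float)
    half_min = 0.5 * fluxes[fluxes > 0].min()
    return np.log10(np.where(fluxes == 0.0, half_min, fluxes))

# Hypothetical shell fluxes (no. m-2 day-1) from one trap deployment:
print(fluxes_for_log_plot([0.0, 12.0, 3.5, 0.0, 40.0]))
```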
### 2.5.3 Plankton tow data
To analyze the vertical distribution, plankton net hauls from different sites distributed across the world ocean (Table S2, Fig. 1b) have been used for a comparison with the simulated vertical distributions. Plankton tow samples have been collected by means of a multiple opening–closing net with a vertical resolution differing between 5 depth levels (one haul) and up to 13 depth levels (two or more consecutive hauls) resolving the upper hundreds of meters of the water column. Since the plankton tow data have been collected during a particular time (i.e., a specific day or month; Table S2), the same month has been considered for the simulated vertical planktonic foraminifera profile for the model–data comparison.
Table 2. Relative sizes of the analyzed planktonic foraminifera species, based on published estimates of species size ranges averaged over the sample locations in that study.
Here we followed the same approach as in earlier work and calculated an average living depth (ALD) and the vertical dispersion (VD) around the ALD to provide a direct comparison with the modeled depth profile. The ALD (in m) is defined as follows:
$$\mathrm{ALD} = \frac{\sum_{i} C_{i} \cdot D_{i}}{\sum_{i} C_{i}},$$
with Ci being the foraminiferal species concentration (in no. m−3) in the depth interval Di; VD (in m) is calculated as
$$\mathrm{VD} = \frac{\sum_{i}\left(\left|\mathrm{ALD} - D_{i}\right| \cdot C_{i}\right)}{\sum_{i} C_{i}}.$$
For further information, we refer to the original description of this method.
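For illustration, the ALD and VD definitions translate directly into code; the tow profile in the example below is hypothetical.

```python
import numpy as np

def average_living_depth(conc, depth):
    """Concentration-weighted average living depth (ALD, in m)."""
    conc = np.asarray(conc, dtype=float)
    depth = np.asarray(depth, dtype=float)
    return np.sum(conc * depth) / np.sum(conc)

def vertical_dispersion(conc, depth):
    """Vertical dispersion (VD, in m) around the ALD."""
    conc = np.asarray(conc, dtype=float)
    depth = np.asarray(depth, dtype=float)
    ald = average_living_depth(conc, depth)
    return np.sum(np.abs(ald - depth) * conc) / np.sum(conc)

# Hypothetical tow profile: concentrations (no. m-3) at the midpoints
# of the sampled depth intervals (m); all values are made up.
c = [5.0, 12.0, 8.0, 3.0, 1.0]
z = [10.0, 30.0, 60.0, 100.0, 175.0]
print(average_living_depth(c, z), vertical_dispersion(c, z))
```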
3 Results
## 3.1 Modeled horizontal distribution patterns
The modeled global spatial distribution patterns based on the depth-integrated annual mean relative abundances of the five considered foraminiferal species (Fig. 2) correspond to the five major provinces of the modern ocean (i.e., polar, subpolar, transitional, subtropical, and tropical) known to be inhabited by those species. Note that since the core-top data used for comparison provide information neither on the depth habitat of the planktonic foraminiferal species nor on their life cycle, the modeled annual mean relative abundances have been obtained by integrating the individual foraminiferal concentrations over the whole water column and by subsequently calculating the percentage of each species relative to the modeled total foraminiferal carbon biomass, whereby we also accounted for the different sizes of each species (Table 2).
Figure 2. (a) Bray–Curtis index of similarity (in %) between the relative abundances of the modeled and core-top data as well as relative abundances of the depth-integrated modeled annual mean concentration (pale-colored contours; in % carbon biomass) and of the core-top samples (circles; in % individuals) for (b) N. pachyderma, (c) N. incompta, (d) G. bulloides, (e) G. ruber (white), and (f) T. sacculifer. The relative abundances consider only the five foraminiferal species included in PLAFOM2.0. In addition, to account for the different sizes of each foraminiferal species, we multiplied the modeled annual mean concentration of each species with their relative size (Table 2) and subsequently calculated the depth-integrated species annual mean concentrations relative to the total modeled foraminiferal carbon biomass. Note that we are aware that for a small number of core-top samples the relative abundances of the individual planktonic foraminiferal species are overestimated due to recalculations only considering N. pachyderma, N. incompta, G. bulloides, G. ruber (white), and T. sacculifer rather than the whole assemblage. However, the overall general pattern does not change and can thus be used for the model–data comparison.
For a direct comparison of the observed (i.e., the core-top data) and modeled foraminiferal community composition the Bray–Curtis index of similarity was used. The comparison reveals generally a good fit between the simulated and sedimentary assemblage composition with a median Bray–Curtis similarity of ∼68 %. The fit is particularly good in the high latitudes and in the tropics (Bray–Curtis similarity >80 %) and only a few regions (off South America and southern Africa, in the equatorial and North Pacific, and in the eastern North Atlantic) reveal a poorer agreement with similarities of <50 % (Fig. 2a).
Figure 3. Violin plots showing the dependence of the modeled peak timing (top row) and/or the modeled peak amplitude (bottom row) on the annual mean temperature (in °C) averaged over the top 55 m of the water column for (a) N. pachyderma, (b) N. incompta, (c) G. bulloides, (d) G. ruber (white), and (e) T. sacculifer. The blue dots represent the respective median values. The modeled peak timing is given in months and the modeled peak amplitude has been log transformed. Note that the peak timings of each species from the Southern Hemisphere have been transformed to Northern Hemisphere equivalents by adding or subtracting 6 months.
In the simulation, the cold-water species N. pachyderma is confined to the high latitudes dominating the polar waters of both hemispheres. Neogloboquadrina pachyderma shows the highest modeled annual mean relative abundances (>90 %) north of the Arctic Circle and south of the Antarctic Convergence, whereas toward the subtropics the species occurrence in the model is reduced gradually (Fig. 2b). Neogloboquadrina incompta occurs mainly in the subpolar to transitional water masses of the world ocean in the simulation. This species shows the highest modeled annual mean relative abundances in the latitudinal belt at around 45° N and/or 45° S (Fig. 2c). Globigerina bulloides also occurs in the subpolar to transitional waters of the world oceans with the highest modeled annual mean relative abundances (>60 %) occurring in the Southern Ocean and in the subpolar gyres (Fig. 2d). In the upwelling region of the equatorial Pacific and in the coastal upwelling systems associated with the cold eastern boundary currents of the Atlantic and Pacific oceans, G. bulloides is found with modeled annual mean relative abundances of <40 %. In the simulation, the warm-water species G. ruber (white) is mostly confined to the subtropical and tropical regions of both hemispheres, whereby the highest modeled annual mean relative abundances of up to 60 % are reached in the subtropical gyres (Fig. 2e). The lowest modeled annual mean relative abundances can be found in the ocean's upwelling areas, especially in the equatorial Pacific cold tongue, where G. ruber (white) appears to be almost absent. The modeled distribution pattern of T. sacculifer is limited to the warm waters of the subtropics and tropics and is similar to the one of G. ruber (white). Trilobatus sacculifer shows the highest modeled annual mean relative abundances (>60 %) in the equatorial Pacific between 15° N and 15° S and exhibits low modeled annual mean relative abundances (<30 %) in the coastal upwelling regions of the ocean basins (Fig. 2f).
Figure 4. Depth transects of the modeled annual mean concentration (in mmol C m−3) along ∼27° W in the Atlantic Ocean (top row), ∼71° E in the Indian Ocean (middle row), and ∼162° W in the Pacific Ocean (bottom row) over the top 300 m for (a) N. pachyderma, (b) N. incompta, (c) G. bulloides, (d) G. ruber (white), and (e) T. sacculifer. The black contour lines indicate the annual mean temperature estimates (in °C). The blank areas denote where a species is absent.
## 3.2 Modeled seasonal distribution
For each foraminiferal species, the month of modeled maximum production changes on average with temperature and consequently with latitude (Fig. 3, Fig. S2). In the simulation, there is a general tendency for the maximum production peak of the cold-water species N. pachyderma to occur later in the year (i.e., during summer) for lower annual mean temperatures (Figs. 3a and S2a). With increasing mean annual temperatures, however, the modeled peak timing occurs earlier in the year (i.e., during spring) (Fig. 3a). For N. incompta, modeled maximum production is reached during late summer in the midlatitudes at lower temperatures and is shifted towards spring and/or early summer when temperatures increase (Fig. S2b). In the low latitudes at high temperatures, however, N. incompta exhibits a constant flux pattern throughout the year (Fig. 3b). The modeled peak timing of G. bulloides is similar to the modeled peak timing of N. incompta, and the highest modeled fluxes are reached later (earlier) in the year in the midlatitudes at lower (higher) temperatures (Fig. S2c). In the warm waters (of the tropics), G. bulloides exhibits year-round a rather uniform flux pattern (Fig. 3c). In the model, both N. incompta and G. bulloides show indications of a double peak in their timing that is shifted towards the first half of the year when temperatures rise (Fig. 3b and c). This earlier-when-warmer pattern is also indicated in the modeled peak timing of N. pachyderma (Fig. 3a). Globigerinoides ruber (white) shows a uniform flux pattern year-round in the warm waters of the world ocean in the subtropical–tropical regions (Fig. S2d). In colder waters (e.g., towards higher latitudes), modeled peak fluxes of G. ruber (white) are reached in late summer and/or fall (Fig. 3d). A similar seasonal pattern in the modeled peak timing is evident for the tropical species T. sacculifer with constant fluxes occurring year-round at high temperatures in the low latitudes (Fig. S2e). At lower ambient temperatures, modeled peak fluxes of T. sacculifer occur during fall (Fig. 3e). For both G. ruber (white) and T. sacculifer, the modeled peak timing is shifted to later in the year when the surroundings become colder (Fig. 3d and e).
To allow for a global comparison of the modeled and observed flux seasonality, we standardized peak amplitudes for each foraminiferal species, i.e., the species maximum concentration divided by its annual mean. This reveals that the timing of the modeled foraminiferal peak abundances varies with temperature, but all five species exhibit an almost constant peak amplitude in their preferred thermal habitat. Outside their preferred living conditions, modeled peak amplitudes considerably increase for most of the species (Fig. 3), and thus the species experience a strong deviation from their annual mean living conditions and likely occur only at times when the ambient conditions are (close to) their optima. For the warm-water species G. ruber (white) and T. sacculifer, peak amplitudes rise when the ambient temperatures fall below 20 °C (Fig. 3d and e). The peak amplitude of G. bulloides increases noticeably with mean annual temperatures falling below 10 °C (Fig. 3c). By contrast, when ambient temperatures exceed 25 °C, the peak amplitude of N. incompta increases (Fig. 3b). For the cold-water species N. pachyderma, the relation between peak amplitudes and mean annual temperatures is more complex (Fig. 3a).
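For readers reproducing this standardization from monthly model output, a minimal sketch is given below; the monthly climatology is invented, and the optional 6-month shift mirrors the Southern Hemisphere transformation used in Fig. 3.

```python
import numpy as np

def peak_timing_and_amplitude(monthly_conc, southern_hemisphere=False):
    """Month (1-12) of maximum concentration and standardized peak
    amplitude (maximum divided by annual mean). Southern Hemisphere
    timings are shifted by 6 months to Northern Hemisphere equivalents."""
    c = np.asarray(monthly_conc, dtype=float)
    peak_month = int(np.argmax(c)) + 1
    if southern_hemisphere:
        peak_month = (peak_month + 6 - 1) % 12 + 1
    return peak_month, c.max() / c.mean()

# Hypothetical monthly climatology (mmol C m-3); values are made up.
clim = [0.2, 0.3, 0.9, 1.6, 1.2, 0.7, 0.5, 0.4, 0.6, 0.5, 0.3, 0.2]
print(peak_timing_and_amplitude(clim))
```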
## 3.3 Modeled vertical distribution
Among the three major ocean basins, the modeled vertical distribution of each considered planktonic foraminiferal species shows similar patterns in the annual mean (Fig. 4). The temperate to cold-water species (i.e., G. bulloides, N. incompta, and N. pachyderma) occur from the surface down to about 200 m of water depth (Fig. 4a, b, and c). Neogloboquadrina pachyderma is consistently present in the top few hundred meters of the water column in the high latitudes and absent in the subtropical–tropical regions. In the polar waters of the three ocean basins, modeled maximum annual mean concentrations are found at the surface and deeper toward lower latitudes. The highest modeled annual mean concentrations of N. pachyderma are, however, located in the subpolar gyres between 0 and 75 m of water depth (Fig. 4a). Neogloboquadrina incompta is in general present between 60° N and 60° S with the modeled annual mean concentration reaching its maximum at around 100 m of water depth. In the middle to higher latitudes, N. incompta is found from the surface to ∼200 m of water depth in the Atlantic, Indian, and Pacific oceans, but seems to be rarely present in the respective uppermost water layers (i.e., between 0 and ∼75 m) of the tropics. However, the modeled annual mean concentration increases with depth, especially from the subpolar regions toward the Equator (Fig. 4b). As for N. incompta, G. bulloides has been consistently found from the surface to ∼200 m of water depth between about 60° N and 60° S (Fig. 4c). Depending on the ocean basin, modeled maximum annual mean concentrations of G. bulloides are either mainly reached at the surface (i.e., in the Indian and Pacific oceans) or at depth (i.e., in the Atlantic Ocean), but also at around 100 m of water depth in the subpolar regions of the three chosen transects. Both N. incompta and G. bulloides show the highest modeled annual mean concentrations between 30° and 60° latitude (Fig. 4b and c).
The warm-water species G. ruber (white) and T. sacculifer are found between the surface of each ocean basin and ∼100 m of water depth, thus occurring in a shallower depth range compared to N. pachyderma, N. incompta, and G. bulloides (Fig. 4d and e). Among all five planktonic foraminiferal species, G. ruber (white) exhibits on average the highest modeled annual mean concentrations along the transects (Fig. 4). This species is confined to the subtropical–tropical regions of the ocean basins with the highest modeled annual mean concentrations occurring between ∼15° and 30° latitude and the lowest around the Equator (Fig. 4d). Along the three chosen transects, modeled maximum annual mean concentrations of G. ruber (white) are almost consistently reached at the surface in the low latitudes and at around 60 m of water depth in areas where the highest modeled abundance of this species occurs. Trilobatus sacculifer also occurs predominantly between 30° N and 30° S with modeled annual mean concentrations gradually decreasing with depth. Compared to the other planktonic foraminiferal species, T. sacculifer exhibits a rather uniform distribution pattern along the different transects (Fig. 4e) with modeled maximum annual mean concentrations being primarily located at the surface.
## 3.4 Modeled seasonal variability of habitat depth
In the model, the depth of maximum production of each considered planktonic foraminifera changes over the course of a year (Fig. 5). Towards higher latitudes, N. incompta and N. pachyderma show maximum abundances at shallower depths than in the low latitudes and midlatitudes. In the polar regions, N. pachyderma occurs close to the surface during winter and descends through the water column from spring to summer with modeled maximum abundances being reached at ∼40 m of water depth in summer. In the subpolar regions, N. pachyderma is generally found between 50 and 100 m of water depth for almost the entire year except for the winter season when the highest modeled concentrations are reached close to the surface (Fig. 5a). The modeled depth habitat of N. incompta increases from spring to summer and is shallower in winter in the subpolar regions (Fig. 5b). In the subtropics and tropics, however, N. incompta shows the highest modeled concentrations year-round consistently below 90 m of water depth.
Figure 5. Zonal average of the depth (in m) at which the modeled maximum production of (a) N. pachyderma, (b) N. incompta, (c) G. bulloides, (d) G. ruber (white), and (e) T. sacculifer occurs over time. The black contour lines indicate the zonal average of the (seasonally varying) depth of the chlorophyll maximum (in m). The blank areas denote where a species is absent.
Globigerina bulloides exhibits a relatively shallow habitat (i.e., up to ∼50 m of water depth) along the Equator throughout the year (Fig. 5c). In the subpolar regions, the depth of the modeled maximum production of G. bulloides varies seasonally and, similar to N. incompta, is shallower during winter and deepest during summer. The modeled depth habitat of G. ruber (white) is mostly confined to the top 60 m of the water column and seems to be less variable compared to the temperate and cold-water species (Fig. 5). In the midlatitudes and near the Equator, the highest modeled concentrations of G. ruber (white) occur close to the surface during almost the entire year, whereas in the subtropical–tropical regions, this species is most abundant below 20 m and shows a weak seasonal cycle, occurring deeper in late summer and/or early fall (Fig. 5d). Trilobatus sacculifer exhibits the least variable depth habitat in the simulation among the five considered species and is consistently found close to the surface above 20 m of water depth throughout the year (Fig. 5e).
4 Discussion
## 4.1 Large-scale patterns
### 4.1.1 Geographical range of planktonic foraminifera species
The predicted global distribution patterns of the five considered planktonic foraminiferal species are in good agreement with the core-top data (Fig. 2a). This is remarkable considering the simplifications that had to be used to facilitate the comparison, such as the use of a constant biomass to size scaling within a species and a constant size scaling among the species.
Neogloboquadrina pachyderma is most abundant in the polar–subpolar waters of the Northern Hemisphere and Southern Hemisphere both in the model and in the core-top samples (Fig. 2b). This cold-water species dominates the waters north of the Arctic Circle and south of the Antarctic Convergence with relative abundances exceeding 90 % and is very rarely found in subtropical–tropical waters, which is also seen in the model output. Earlier observational studies showed that N. pachyderma mainly occurs in regions with sea surface temperatures (SSTs) below 10 °C, but is also present in the cold–temperate waters of, e.g., the subpolar gyres with relative abundances being reduced to 30–50 %. Thus, in areas influenced by warmer waters the abundance of this species decreases gradually. This is especially evident in the eastern North Atlantic Ocean, where the abundance of N. pachyderma is reduced to about 50 % due to the influence of the warm Atlantic Water, which is transported northward by the North Atlantic Current (NAC). In line with the observations, the modeled annual mean relative abundances of N. pachyderma also decrease with decreasing latitude and are hence reduced towards warmer surface waters (Fig. 2b). Additionally, PLAFOM2.0 is able to reproduce the observed species abundance pattern in the North Atlantic with a reduced relative abundance of < 30 % in the area influenced by the NAC. Similar to PLAFOM (see Fraile et al., 2008), a slight deviation between the simulated and observed relative abundances of N. pachyderma at the edge of the species distribution pattern is observed in the Northern Hemisphere. It has been shown that distinct genotypes discovered within this morphologically defined species exhibit different ecological preferences. Thus, the abovementioned minor discrepancy might partly arise from the underlying model parameterizations, which are mainly based on the environmental preferences (i.e., temperature tolerance limits) of the N. pachyderma genotypes found in the Southern Ocean (for more details see Fraile et al., 2008), which differ genetically from the genotypes found in the North Atlantic and North Pacific oceans.
The modeled global distribution patterns of N. incompta and G. bulloides agree to a broad extent with the observations (Fig. 2c and d). Both species are predominantly found in the subarctic, subantarctic, and transitional waters of the world oceans (with relative abundances >50 %), where SSTs range between 10 and 18 °C. They are also highly abundant in the cool eastern boundary currents off Africa and South America (e.g., Bé and Tolderlund, 1971; Giraudeau, 1993; Darling et al., 2006) as well as in the eastern North Atlantic and occur continuously in a subantarctic belt between 30° S and the Antarctic Convergence. In addition, high abundances (> 40 %) of N. incompta have been observed in the equatorial Pacific upwelling system and of G. bulloides in the Arabian Sea. In the model, N. incompta is confined to the subpolar belts at around 45° latitude, which matches the general distribution pattern seen in the core-top data, but the relative abundance is underestimated (here N. incompta accounts for < 20 % of the modeled assemblage compared to up to 50 % in the observations; Fig. 2c). The model prediction for G. bulloides shows, in accordance with the core-top samples, higher abundances in the subantarctic belt (here the species accounts for up to 80 % of the modeled assemblage) and in the (coastal) upwelling regions of the Atlantic and Pacific oceans (Fig. 2d). PLAFOM2.0, however, fails to fully capture the relative abundances in areas where the assemblages are usually dominated by N. incompta and G. bulloides (Fig. 2c and d). For instance, in the Benguela upwelling system, N. incompta and G. bulloides together account locally for > 60 % of the total planktonic foraminifera population, whereas in the model, both species account for <40 % of the assemblage. In fact, N. incompta is almost absent in the model simulation outside of the subpolar belts. Furthermore, in the western Arabian Sea, the modeled annual mean relative abundance of G. bulloides ranges between 10 and 20 %, which corresponds to the lower end of the observed range varying between 20 and ∼50 %. Additionally, it is evident that the model slightly overestimates the relative abundance of G. bulloides in the central subtropical–tropical waters of the ocean basins (Fig. 2d). The apparent discrepancies between the observations and PLAFOM2.0 arise firstly due to an overestimation of the modeled annual mean relative abundances of G. bulloides, in particular in the subpolar belt at around 45° N, and of G. ruber (white) and T. sacculifer, especially in the upwelling regions, and/or due to the overall underestimation of the occurrence of N. incompta outside the subpolar belts. Secondly, since the model parameterizations are performed on a global scale, distinct genotypes (possibly having different environmental preferences) of N. incompta and especially of G. bulloides (e.g., Kucera and Darling, 2002; Morard et al., 2013) cannot be included in detail in the model, potentially resulting in the model–data mismatch.
The simulated global distribution patterns of G. ruber (white) and T. sacculifer compare favorably with the core-top samples (Fig. 2e and f). Both species dominate the subtropical and tropical waters of the global ocean, together accounting for 75–100 % of the total planktonic foraminiferal fauna. Globigerinoides ruber (white) is the most abundant species in the subtropical areas where SSTs range between 21 and 29 °C, whereas T. sacculifer shows highest relative abundances (> 50 %) in the tropics with SSTs between 24 and 30 °C. Additionally, G. ruber (white) is also highly abundant (>50 %) compared to T. sacculifer along the continental margins of the low latitudes (Fig. 2e and f). However, in the coastal upwelling regions, G. ruber (white) and T. sacculifer are rarely found as cooler water masses influence their usual habitat (e.g., Thiede, 1975). Since both species thrive in warmer waters, their (relative) abundance gradually diminishes when transported towards the higher latitudes, thus being absent in the subpolar–polar regions of the ocean basins. The model predictions for G. ruber (white) and T. sacculifer show in general similar patterns as the observations with higher loadings in the subtropical and tropical regions and a gradual decrease in occurrence toward the poles (Fig. 2e and f). PLAFOM2.0 is also able to reproduce the dominance of G. ruber (white) in the subtropics and of T. sacculifer around the Equator; together both species account for > 70 % of the modeled assemblage in the warm waters of the world ocean. Additionally, the reduction in the (relative) abundances in the upwelling regions (i.e., along the equatorial Pacific and the coasts of South America and Africa) is likewise captured by the model. However, in provinces dominated by G. ruber (white) and T. sacculifer, the relative abundances are underestimated in the model, whereas in the coastal upwelling regions, the species abundances are slightly overestimated compared to the observations. Such deviations may result from the overestimation and/or underestimation of G. bulloides and N. incompta in the tropical–subtropical and upwelling regions (Fig. 2c and d) or from the 1° model resolution leading to an inadequate representation of the coastal upwelling regions.
Thus, we consider that part of the model–data mismatch may arise from uncertainty in the conversion of biomass to (relative) abundance, which is based on constant offsets approximated from sparse data. Likely an even larger part of the discrepancies between the model and core-top data stems from the underlying model parameterizations applied on a global scale, which do not distinguish between distinct genotypes of the different species with potentially varying ecological preferences. Theoretically, this problem could be solved by parameterizing all known genotypes individually and approximating the total morphospecies abundance as the sum of its constituent genotypes. This would allow for a comparison with sediment data, but not for a diagnosis, since the sediment data provide no information on which genotypes are contained in the assemblages. Interestingly, the generally fair fit between the model and observations suggests that ecological differences between cryptic species are likely limited and that the model provides a useful first-order approximation of global species distribution.
### 4.1.2 Seasonality of planktonic foraminifera species
A meta-analysis based on sediment trap data revealed that the (spatially varying) seasonality of individual planktonic foraminifera is predominantly related to either temperature or the timing of primary productivity. For the temperate and cold-water species, such as G. bulloides, N. incompta, and N. pachyderma, one or two flux maxima have been observed, which occur earlier in the year at higher temperatures. This seasonal pattern is also to a large degree evident in the model results (Figs. 3a–c and S2a–c). At lower temperatures (below 5 °C), the modeled season of maximum production for the cold-water species N. pachyderma is predominantly reached in (late) summer, whereas in the comparatively warmer subpolar and transitional waters, the modeled peak season is shifted towards spring (Figs. 3a and S2a). A similar pattern can be observed for N. incompta and G. bulloides. In line with the observations, none of the three species shows a clear peak amplitude dependency on temperature (Fig. 3a–c). In the model, the temperate and cold-water species exhibit a shift in their peak timing, but do not considerably change their peak amplitude (except for G. bulloides when temperatures fall below 5 °C). Hence, the cause of the observed and predicted earlier-when-warmer pattern can most likely be sought to a large extent in the timing of primary productivity rather than in a temperature dependence. Several studies showed that the seasonality of temperate and cold-water planktonic foraminiferal species is closely tied to phytoplankton bloom events leading to an increased food supply (e.g., Fairbanks and Wiebe, 1980; Donner and Wefer, 1994; Wolfteich, 1994; Kohfeld et al., 1996; Mohiuddin et al., 2002, 2004, 2005; Northcote and Neil, 2005; Asahi and Takahashi, 2007; Storz et al., 2009; Wilke et al., 2009; Jonkers and Kučera, 2015). In particular, the flux of G. bulloides reaches highest values in response to an increased food supply to a large extent associated with open ocean and/or coastal upwelling. The warm-water species G. ruber (white) and T. sacculifer exhibit relatively uniform annual flux patterns with almost no seasonal peak in the subtropical–tropical regions of the ocean basins (e.g., Deuser et al., 1981; Jonkers and Kučera, 2015). Similar to observations, the modeled timing of the low-amplitude peaks is random during the year in warm waters (Figs. 3d–e and S2d–e). However, in colder waters, peak fluxes are concentrated towards fall and peak amplitudes increase considerably both in the observations and in the model (Figs. 3d–e and S2d–e). This shift in seasonality can most likely be linked to temperature. In the low latitudes, optimum temperatures prevail year-round, whereas further northward or southward those optimum thermal conditions occur only during a short period later in the year. Thus, those species focus their flux into the warm season in colder waters (Fig. 3d–e). This emerging behavior is consistent with observations from sediment traps and suggests that the seasonality of warm-water species is driven by temperature rather than food availability, which is in agreement with observational studies (e.g., Wolfteich, 1994; Eguchi et al., 1999, 2003; Kincaid et al., 2000; Kuroyanagi et al., 2002; Mohiuddin et al., 2002, 2004; Storz et al., 2009; Jonkers and Kučera, 2015).
### 4.1.3 Spatial and temporal variability of depth habitats of planktonic foraminifera species
The modeled depth habitats of N. pachyderma, N. incompta, G. bulloides, G. ruber (white), and T. sacculifer differ and show (distinct) spatial and temporal variability in response to different environmental conditions (Figs. 4 and 5). Plankton tow studies have shown that the vertical distribution of planktonic foraminifera is mostly affected by temperature, primary productivity, light availability, and the thermal or density stratification of the upper water column (e.g., Fairbanks et al., 1982; Ortiz et al., 1995; Schiebel et al., 2001; Field, 2004; Kuroyanagi and Kawahata, 2004; Salmon et al., 2015; Rebotim et al., 2017).
In line with the observations, the modeled depth distribution patterns indicate that the warm-water species G. ruber (white) and T. sacculifer occur at shallower depths compared to the temperate and cold-water species G. bulloides, N. incompta, and N. pachyderma (see Figs. 4 and 5). In the model, both G. ruber (white) and T. sacculifer have been consistently found from the surface to ∼100 m of water depth in the subtropical–tropical regions of the ocean basins (Fig. 4d–e). In the tropics, they are most abundant close to the surface, which agrees well with the observations. In the Arabian Sea and in the central tropical Pacific Ocean, both species have been mostly found in the upper 60 m. In the transitional and subtropical waters, however, PLAFOM2.0 slightly underestimates the depth habitat of G. ruber (white) and T. sacculifer (Figs. 4d–e and 5d–e) as they inhabit the upper 125 m in the western North Atlantic and/or consistently occur from 0 to 200 m of water depth in the subtropical eastern North Atlantic or in the seas surrounding Japan. Nevertheless, both species typically live close to the surface (above 100 m) (e.g., Bé and Hamlin, 1967; Fairbanks et al., 1982; Kemle-von Mücke and Oberhänsli, 1999; Schiebel et al., 2002; Wilke et al., 2009; Rippert et al., 2016), thus being associated with a shallow depth habitat, which is reproduced by the model. Since T. sacculifer and G. ruber (white) are algal symbiont-bearing species, they are most abundant in the photic zone where light intensities are highest, but chlorophyll a concentrations and temperature also control their habitat. Light intensity is especially important for the growth of T. sacculifer, whereas G. ruber (white) seems to be more affected by food availability rather than light. This is to some degree also indicated in our results, as on average the highest modeled concentrations of T. sacculifer occur at shallower depths compared to G. ruber (white) (see Figs. 4d–e and 5d–e). However, at some locations both the model and observations show the reverse (see Fig. S4 and, e.g., Rippert et al., 2016; Rebotim et al., 2017), indicating that this depth ranking is not globally valid. In comparison with the temperate and cold-water species, G. ruber (white) and T. sacculifer are most abundant in the model in waters with temperatures above 22 °C and absent where temperature values drop below 15 °C (see Fig. 4), reflecting the different temperature tolerance limits of the two species.
Neogloboquadrina pachyderma, N. incompta, and G. bulloides generally thrive in cold to temperate waters. In the model, the depth habitat of those species decreases with increasing latitude (Fig. 4a–c), indicating a preferred habitat in the subsurface (see Fig. 5a–c). This is consistent with the observations from several locations where the three species have typically been found between 50 and 200 m of water depth (e.g., Kohfeld et al., 1996; Mortyn and Charles, 2003; Kuroyanagi and Kawahata, 2004; Bergami et al., 2009; Wilke et al., 2009; Pados and Spielhagen, 2014; Iwasaki et al., 2017; Rebotim et al., 2017). In the subtropical to subpolar regions, the highest modeled concentrations of G. bulloides occur between 60 and 100 m, whereas in the tropics, maxima are reached close to the surface (Figs. 4c and 5c). This agrees well with the observations: G. bulloides has been found to be tightly linked to phytoplankton bloom events occurring either at deeper depth layers associated with a deep chlorophyll maximum (DCM) or in the coastal and equatorial upwelling regions, where a shoaling of the species habitat towards the near surface can also be related to high chlorophyll a concentrations. Neogloboquadrina incompta is also highly abundant where chlorophyll a concentrations are high, but nevertheless has most often been observed at mid-depth. In the model, N. incompta also shows the highest concentrations between 30 and 120 m (Figs. 4b and 5b), clearly inhabiting the subsurface. This is especially evident in the tropics, where N. incompta is virtually absent in the near-surface layers, but present, albeit in low numbers, around 100 m of water depth. The predictions show, in general, that N. incompta prefers warmer waters compared to N. pachyderma and, where the species coexist, N. incompta inhabits shallower depths for this reason (Figs. 4a–b and 5a–b). This agrees with the observations from the subarctic Pacific and the seas around Japan. Neogloboquadrina pachyderma is confined to the high latitudes with peak abundances occurring in the upper 100 m of the water column (partly associated with high chlorophyll a concentrations), which agrees well with the model results. Although N. pachyderma has been classified as a “deep dweller” in different studies, this species appears to be more surface-restricted at higher latitudes, which is also evident in the model results (Figs. 4a and 5a).
Several studies showed that the depth habitat of planktonic foraminifera varies throughout the year in response to changing environmental conditions. Rebotim et al. (2017) identified an annual cycle in the habitat of T. sacculifer and N. incompta in the subtropical eastern North Atlantic. Both species appear to descend in the water column from winter to spring and reach their deepest habitat in spring to summer before ascending again to a shallower depth towards winter. It has been suggested that N. incompta is affected by chlorophyll a concentrations, and hence the seasonal shift in its habitat depth could be related to food availability as a DCM develops in the summer months. In the Canary Islands region, G. ruber (white) and G. bulloides have been found at shallower depth levels during winter, and during summer and/or fall shell concentrations were highest at depth, associated with the DCM (Wilke et al., 2009). However, G. ruber (white) did occur at moderate abundance levels throughout the year, whereas G. bulloides was only present in low numbers during wintertime in the study area of Wilke et al. (2009). Peeters and Brummer (2002) investigated the influence of a changing hydrography on the habitat of living planktonic foraminifera in the northwest Arabian Sea. During the southwest monsoon (occurring in summer), strong coastal upwelling associated with low SSTs and a near-surface chlorophyll maximum leads to high abundances of G. bulloides dominating the species assemblage in the uppermost part of the water column. In comparison, during the northeast monsoon (occurring in winter), a relatively warm nutrient-depleted surface mixed layer and a DCM develop, resulting in high concentrations of G. ruber (white) and T. sacculifer near the surface, whereas the concentrations of G. bulloides are low and show a subsurface maximum between the DCM and the thermocline. Based on their findings, Peeters and Brummer (2002) conclude that the habitat depth of individual foraminifera strongly depends on the local hydrography controlling, i.a., the food availability. High abundances of G. bulloides have also been found in the equatorial surface waters of the Pacific Ocean, associated with higher primary productivity due to intensified upwelling, but also with zonal advection by the South Equatorial Current during La Niña conditions. In contrast, during El Niño conditions, G. bulloides has been absent from the central tropical Pacific due to unfavorable living conditions.
The change in the depth of the modeled maximum production of each considered planktonic foraminiferal species throughout the year (Fig. 5) agrees to a large extent with the observations. Neogloboquadrina pachyderma is almost constantly found below 50 m except during winter, when the highest modeled concentrations occur close to the surface (Fig. 5a). The shift in the simulated habitat depth most likely indicates that N. pachyderma is highly dependent on food availability (Fig. 5a), which coincides with observational studies in which this species has been extensively found at mid-depth during summer, associated with the chlorophyll maximum. The simulated change from a deeper to a shallower depth habitat of N. incompta in the subpolar regions over the course of a year could be strongly affected by the food supply, with the species potentially following the seasonal distribution of phytoplankton. In the low latitudes, modeled maximum concentrations of N. incompta are constantly reached below 90 m of water depth, which might be attributed to the presence of a permanent DCM (Fig. 5b), a characteristic feature throughout the low latitudes. Globigerina bulloides, however, is found year-round close to the surface along the Equator in the model (Fig. 5c), which, in line with the observations, can be associated with equatorial upwelling, but the inclusion of the photosynthetic growth rate in the model could also explain the occurrence of modeled maximum concentration values at shallower depth levels due to higher light requirements compared to N. incompta. In the subpolar regions, the simulated depth habitat of G. bulloides varies seasonally, most likely following the chlorophyll maximum (Fig. 5c). The model simulation indicates that the seasonal occurrence of both G. ruber (white) and T. sacculifer in colder regions, where they face suboptimal environmental conditions, is limited to the warm surface layer during the warm season (Fig. 5d–e). Even in the low latitudes, both species exhibit a weak seasonal cycle in their simulated depth habitat, which is more pronounced for G. ruber (white) (Fig. 5d), indicating some influence of primary productivity, which also agrees with the observations. In line with the observations, our results suggest that T. sacculifer prefers warmer waters than G. ruber (white) year-round (Fig. 5e) and is most abundant at shallow depths where the light intensity is highest. Our results thus confirm the observation that both G. ruber (white) and T. sacculifer adapt to changing environmental conditions by adjusting their seasonal and vertical habitat to local circumstances. This emerging behavior can have important implications for paleoceanographic reconstructions.
We find that the modeled depth habitats of the five considered foraminiferal species are in agreement with the relative ranking of their apparent calcification depths, but the inferred absolute values of calcification depth are often deeper or show a broader range of depths (e.g., Carstens and Wefer, 1992; Kohfeld et al., 1996; Ortiz et al., 1996; Bauch et al., 1997; Schiebel et al., 1997; Ganssen and Kroon, 2000; Peeters and Brummer, 2002; Anand et al., 2003; Simstich et al., 2003; Nyland et al., 2006; Jonkers et al., 2010, 2013; van Raden et al., 2011). This is not surprising because PLAFOM2.0 does not model species ontogeny and cannot capture processes related to ontogenetic depth migration (e.g., Fairbanks et al., 1980; Duplessy et al., 1981). The same limitation applies to estimates of living depth derived from plankton tow data, which often appear to deviate from apparent calcification depths (e.g., Duplessy et al., 1981; Rebotim et al., 2017). Nevertheless, as a first essential step in understanding the variability in calcification depths, PLAFOM2.0 provides a powerful tool that can aid the interpretation of proxy records.
## 4.2 Detailed comparison with observations
The emergence of seasonal and vertical habitat patterns consistent with observational data provides important support for our modeling approach. Nevertheless, a more detailed comparison with observations is warranted to gain further insight into the model behavior. However, when comparing observational data and model output, one has to bear in mind several caveats. These can be broadly categorized into four groups: (i) model resolution, (ii) model parameterization, (iii) model hierarchy, and (iv) analytical constraints on the observations.
i. The model resolution has limits on temporal and spatial scales when compared to sediment trap and plankton tow data. Most sediment trap time series span at most a few years and hence represent short records that are potentially aliased and/or biased by interannual, seasonal, and/or monthly variability. Similarly, plankton tow samples represent snapshots (of one particular day), and the prevailing environmental conditions during their actual sampling time cannot be fully captured by the model. In fact, the model is forced using climatological data, thus representing a long-term average response that ignores such short-term variability. Additionally, because of the employed 1° model resolution, only the nearest model grid points rather than the exact locations of the sediment traps and plankton tows (especially along the coastlines) can be considered (see the grid-matching sketch after this list). This potentially results in different environmental conditions influencing the seasonality and depth habitat of planktonic foraminifera compared to the observations. The observational records are additionally affected by sub-grid phenomena (such as mesoscale eddies and/or steep gradients). For instance, Gulf Stream cold-core rings transport large planktonic foraminiferal assemblages into the generally nutrient-poor Sargasso Sea. In addition, Beckmann et al. (1987) found that an increase in zooplankton (including planktonic foraminifera) productivity coincided with an increase in phytoplankton biomass in a cold-core eddy in the eastern North Atlantic. The 1° resolution of the underlying model configuration leads to an inadequate representation of such sub-grid processes, and thus their impact cannot be fully reflected by the CESM1.2(BGC+PLA) configuration.
ii. The underlying model parameterizations used in PLAFOM2.0 are limited in regard to taxonomic resolution and species ontogeny. Different genotypes of one species could exhibit different habitat preferences (e.g., Kuroyanagi and Kawahata, 2004), which are not captured by PLAFOM2.0 since the model parameterizations do not resolve the different known genotypes of the considered planktonic foraminiferal species. Several studies from different areas also showed that the main habitat depth of some species increases from the surface to deeper water layers during shell growth. This vertical migration of planktonic foraminifera during ontogeny cannot be reproduced by PLAFOM2.0, as the model parameterizations do not include the individual species' life cycles.
iii. The underlying complex model configuration consists of three major model components (i.e., the POP2 ocean model, the BEC ecosystem model, and PLAFOM2.0), which follow a certain model hierarchy by interacting differently with each other. Both the BEC model and PLAFOM2.0 run within POP2 (see Moore et al., 2013; Lindsay et al., 2014; this study), which provides the temperature distribution used to determine, i.a., the phytoplankton, zooplankton, and/or foraminifera carbon concentrations. It was shown that POP2 exhibits several temperature biases (e.g., Danabasoglu et al., 2012, 2014). These include large warm SST biases originating in the coastal upwelling regions of North and South America and of South Africa, colder-than-observed subthermocline waters in the equatorial Pacific, and cold temperature biases of up to 7 °C in the North Atlantic emerging throughout the water column (see Fig. S5 and Danabasoglu et al., 2012, 2014). These temperature biases influence the foraminiferal distributions directly and indirectly by affecting the distributions of their food sources in the BEC model. In addition, the BEC model also exhibits several biases, such as higher-than-observed (lower-than-observed) surface nutrient and chlorophyll concentrations at low (high) latitudes, implying potential misrepresentations of the modeled phytoplankton and zooplankton distributions, likely influencing the foraminiferal carbon concentrations. The inferred importance of temperature and food availability (estimated by POP2 and/or the BEC model) in PLAFOM (see Fraile et al., 2008; Kretschmer et al., 2016) for the distribution of planktonic foraminifera implies that each model component is important for an accurate representation of the foraminifera distribution. It is therefore difficult to unequivocally attribute the model–data mismatch to the individual components of the CESM1.2(BGC+PLA) model configuration.
iv. The analytical constraints regarding the observational records include drift due to (sub-grid) ocean processes, the distinction between live and dead specimens, collection depths, and taxonomic agreement among different studies. For instance, a few sediment trap samples might be compromised by the collection of sinking particles derived from different regions of the surface ocean and transported through eddies and/or ocean currents. Strong current velocities, sometimes associated with eddies, could lead to a tilt of the moored sediment trap, resulting in less material being collected. The impact of eddies might thus bias the observed season of the maximum production of planktonic foraminifera as well as their average living depth. A further uncertainty in the plankton tow data arises from the identification of living cells, because dead cells with cytoplasm collected at depth still appear as living and shift the average living depth to greater depths. Uneven sampling intervals of the tows also result in a bias in the observed depth habitat (Fig. S4). Additionally, taxonomic consistency across the observational data is assumed, which cannot be guaranteed, as different researchers have been responsible for the data collection (see Tables S1 and S2).
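As a minimal illustration of the grid matching mentioned in caveat (i), the following Python sketch assigns a sampling site to its nearest cell on a regular 1° grid. This is a simplification: the actual POP2 grid is curvilinear (displaced pole), so a real lookup would use the 2-D grid coordinate arrays, and the site coordinates used here are rounded.

```python
import numpy as np

# Hypothetical regular 1-degree grid (cell centers); an assumption for
# illustration only, not the actual POP2 grid geometry.
lats = np.arange(-89.5, 90.0, 1.0)
lons = np.arange(0.5, 360.0, 1.0)

def nearest_cell(site_lat, site_lon):
    """Indices of the grid cell whose center is closest to a trap/tow site."""
    j = int(np.abs(lats - site_lat).argmin())
    i = int(np.abs(lons - (site_lon % 360.0)).argmin())
    return j, i

# Example: Ocean Station PAPA (approx. 50 N, 145 W; coordinates rounded).
j, i = nearest_cell(50.0, -145.0)
print(f"nearest cell center: {lats[j]:.1f} N, {lons[i]:.1f} E")
```

For coastal sites the nearest ocean cell may lie tens of kilometers from the true location, which is one source of the environmental mismatch described in caveat (i).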
Figure 6(a) Peak seasons (i.e., caloric season of maximum production), (b) peak amplitudes (i.e., maximum production divided by annual mean), and (c) average living depths (in m) for N. pachyderma (light blue), N. incompta (orange), G. bulloides (green), G. ruber (white) (gold), and T. sacculifer (orchid) based on either the sediment trap data (given in Table S3) or the plankton tow data (given in Table S4) vs. PLAFOM2.0. The symbols represent the polar (squares), subpolar (diamonds), transitional (left-pointing triangles), subtropical (circles), and tropical (upward-pointing triangles) provinces of the ocean. The symbols in (a) indicate the month corresponding to the mid-season and the error bars refer to the overall time frame given in Table S3a. Note that the observed and modeled peak amplitudes in (b) have been log transformed.
With these caveats in mind, we compare the results of PLAFOM2.0 with 26 sediment trap records and 45 plankton tow samples from all oceans (Fig. 1b, Tables S1 and S2). Note that the results of the point-by-point comparative analysis for each site and species are given in the Supplement (see Figs. S3 and S4).
The peak season of the temperate and cold-water species (G. bulloides, N. incompta, and N. pachyderma) is shifted from late summer in the higher latitudes towards spring at the more equatorward locations in the subpolar and transitional water masses, both in the model and in the sediment trap records (Fig. 6a, Table S3a). The modeled peak amplitudes of those species remain almost constant at rather low values independent of the considered region. In the sediment traps, however, the peak amplitude values are higher and more diverse, and no clear pattern with species or with latitudinal province is evident (Fig. 6b, Table S3b). In line with the plankton tow samples, N. pachyderma, N. incompta, and G. bulloides occur to a large extent below 50 m of water depth from the cold high latitudes to the warmer provinces. However, the modeled ALDs (ranging between 20 and 100 m) are considerably shallower than the observed ALDs, which spread over 250 m (Fig. 6c, Table S4). The warm-water species G. ruber (white) and T. sacculifer occur year-round in the subtropical–tropical regions with no distinct preference for a particular season, both in the observations and in the model simulation (Fig. 6a, Table S3a). In the transitional waters, however, their peak fluxes are consistently concentrated in fall, leading to higher peak amplitude values at least in the model (Fig. 6b, Table S3b). Throughout the tropics and subtropics, the modeled peak amplitudes remain constant at low values. In the sediment trap records, however, the peak amplitudes are higher (compared with PLAFOM2.0) and vary both within species and within each province (Fig. 6b). In the tropics, G. ruber (white) and T. sacculifer occur primarily close to the surface with ALDs of less than 50 m, both in the model simulation and in the plankton tow records (Fig. 6c, Table S4). In fact, the predicted ALD values (consistently ranging between the surface and 55 m) are shallower than the observations in the transitional and subtropical waters and accordingly do not exhibit a similar value range as the plankton tow records.
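The peak-amplitude diagnostic of Fig. 6b can be summarized in a few lines. The sketch below (with invented monthly series; the function name is ours) also shows why the ratio-based definition makes trap fluxes and modeled biomass directly comparable despite their different units.

```python
import numpy as np

def log_peak_amplitude(monthly):
    """Peak amplitude as defined for Fig. 6b: maximum monthly value divided
    by the annual mean, log10-transformed before model-data comparison."""
    monthly = np.asarray(monthly, dtype=float)
    return np.log10(monthly.max() / monthly.mean())

# Invented 12-month series for one site; the units differ (trap flux in
# no. m-2 day-1, modeled biomass in mmol C m-2) but cancel in the ratio.
trap_flux     = [12, 30, 95, 310, 140, 60, 45, 80, 220, 130, 40, 18]
model_biomass = [0.8, 1.1, 1.9, 2.6, 2.2, 1.5, 1.2, 1.4, 2.0, 1.7, 1.1, 0.9]

print(log_peak_amplitude(trap_flux))      # ~0.50: pronounced seasonal peak
print(log_peak_amplitude(model_biomass))  # ~0.23: weaker modeled seasonality
```

In this invented example the modeled amplitude is lower than the observed one, mimicking the tendency of PLAFOM2.0 to underestimate the magnitude of the observed amplitude variations noted below.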
In general, the point-by-point comparison between the observations and the model simulation reveals that the peak seasons are well predicted by PLAFOM2.0. The predicted peak amplitudes and average living depths also show realistic trends, but the model tends to underestimate the magnitude of these trends (Fig. 6). Additionally, some sediment trap flux time series of the temperate and cold-water planktonic foraminiferal species show two seasonal peaks a year (see Figs. S3 and 7a). PLAFOM2.0 is not always able to faithfully reproduce this bimodal pattern (cf. Figs. S3 and 7a). In the following, we try to identify the causes of discrepancies between the observations and the predictions by comparing the model output with exemplary sediment trap records and/or plankton tow samples from three locations each (Fig. 7).
Figure 7(a–c) Comparison of export planktonic foraminiferal shell fluxes in sediment traps (grey triangles) with the residuals (i.e., the deviation from the mean) of the depth-integrated modeled foraminiferal carbon biomass (light blue squares). Note that the difference in the units between sediment trap data (in log10[no. m−2 day−1]) and model output (in mmol C m−2) does not affect the assessment of peak timing. The orange circles denote the residuals of the depth-integrated modeled chlorophyll concentration (in mg Chl m−2), and the dark green asterisks indicate in (a) the residuals of the depth-integrated modeled diatom concentration (in mmol C m−2), in (b) the modeled chlorophyll concentration (in mg m−3) at 55 m of water depth, and/or in (c) the residuals of the sum of the depth-integrated modeled diatom and large detritus (i.e., main food) concentrations (in mmol C m−2). (d–f) Comparison of the vertical distribution of live specimens in plankton tows (in no. m−3; grey bars) with the modeled foraminiferal concentration over depth (in mmol C m−3; light blue profiles). The dashed dark grey and blue lines indicate the average living depth (in m) and vertical dispersion calculated for the plankton tows (ALDtow±VDtow) and PLAFOM2.0 (ALDmod±VDmod), respectively. The dashed red lines denote the predicted temperature profiles (in °C), whereas the dark green lines correspond to the modeled vertical distribution of (d) diatoms (in mmol C m−3) and/or (e–f) zooplankton (in mmol C m−3). Data series of (a) N. pachyderma at site PAPA, (b) N. incompta at site CP, and (c) G. bulloides at site WAST. Depth profiles of (d) N. pachyderma at station PS55-063, (e) G. ruber (white) at station MOC1-28, and (f) T. sacculifer at station SO225-21-3. The locations of each sediment trap and plankton tow sample are given in Tables S1 and S2, respectively.
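For reference, the following sketch computes the ALD and VD shown in Fig. 7d–f from a discrete profile, following the concentration-weighted definitions commonly used for plankton tow data (e.g., Rebotim et al., 2017); the exact weighting shown here and the example profile are illustrative assumptions.

```python
import numpy as np

def ald_vd(conc, depth):
    """Concentration-weighted average living depth (ALD, m) and vertical
    dispersion (VD, m); conc may be tow counts (no. m-3) or modeled biomass
    (mmol C m-3), depth = midpoints of the sampled intervals (m)."""
    conc = np.asarray(conc, dtype=float)
    depth = np.asarray(depth, dtype=float)
    ald = np.sum(conc * depth) / np.sum(conc)
    vd = np.sum(conc * np.abs(depth - ald)) / np.sum(conc)
    return ald, vd

# Invented five-interval profile (0-300 m), mimicking a subsurface maximum:
depth_mid = [25.0, 75.0, 125.0, 200.0, 300.0]
tow_conc  = [2.0, 6.5, 3.0, 0.8, 0.1]
ald, vd = ald_vd(tow_conc, depth_mid)
print(f"ALD = {ald:.1f} m, VD = {vd:.1f} m")  # ALD ~ 88.9 m, VD ~ 35.2 m
```

Because both quantities are normalized by the total concentration, model profiles and tow profiles can be compared in the same way despite their different units, as done for ALDmod±VDmod and ALDtow±VDtow in Fig. 7.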
The timing of flux pulse(s) of the temperate and cold-water species has, in general, been linked to the timing of the peak in primary productivity (e.g., Fairbanks and Wiebe, 1980; Donner and Wefer, 1994; Wolfteich, 1994; Kohfeld et al., 1996; Mohiuddin et al., 2002, 2004, 2005; Northcote and Neil, 2005; Asahi and Takahashi, 2007; Storz et al., 2009; Wilke et al., 2009; Jonkers and Kučera, 2015). It is known from studies of the North Atlantic Ocean that phytoplankton seasonality changes with latitude, featuring a single spring bloom in the polar and subpolar Atlantic, a bimodal pattern (one large peak in spring, one smaller peak in fall) in the temperate North Atlantic, a single fall–winter bloom in the subtropical Atlantic, and no prominent seasonal cycle in the tropical Atlantic (e.g., Colebrook, 1979, 1982; Taboada and Anadón, 2014; Friedland et al., 2016). The ecosystem model (providing the food information for PLAFOM2.0), however, does not faithfully reproduce the observed seasonal cycle in primary productivity (cf. Fig. 4 in Moore et al., 2002b). The simulated (depth-integrated) chlorophyll concentration, used as an indicator for productivity, only in part shows two cycles per year (Fig. 7a–c). Nevertheless, the peak timings of the (depth-integrated) foraminifera concentration follow the maxima in primary productivity. For instance, the modeled maximum production peak of N. pachyderma at site PAPA coincides with a peak in the diatom concentration (Fig. 7a), N. incompta reaches its simulated maximum at depth at site CP, following a DCM (Fig. 7b), and the predicted spring and/or fall peak of G. bulloides at site WAST occurs slightly after the peak in the main food concentration or the maximum in the chlorophyll concentration (Fig. 7c). However, the ecosystem model seems to underestimate the seasonality in primary productivity, which most likely leads to the model–data mismatch in the seasonal pattern of the planktonic foraminifera concentration. Additionally, the variability of planktonic foraminifera carbon biomass produced by PLAFOM2.0 is in general too low compared to the observations. This mismatch can be explained by misrepresentations of the foraminiferal carbon biomass or of the foraminifera response (to the environmental forcing) in the model parameterizations or by an underestimation of the driving factors (especially the main food sources, as outlined above). The depth habitat of planktonic foraminifera depends on several environmental and ontogenetic factors (e.g., Fairbanks and Wiebe, 1980; Fairbanks et al., 1982; Schiebel et al., 2001; Simstich et al., 2003; Field, 2004; Salmon et al., 2015; Rebotim et al., 2017). The simulated vertical distribution patterns can also be related to food availability and temperature (Fig. 7d–f). For instance, at station PS55-063, peak abundances of N. pachyderma are reached in the top 50 m in the model, corresponding to the highest diatom concentrations (Fig. 7d). At station MOC1-28, the predicted depth profile of G. ruber (white) coincides with the vertical distribution pattern of zooplankton, with maximum concentrations occurring over the top 50 m of the water column (Fig. 7e). At station SO225-21-3, the modeled species concentration of T. sacculifer decreases gradually with depth, following the zooplankton distribution, but also temperature (Fig. 7f). However, the simulated depth profiles differ from the observations, which is also indicated by the differences in the ALDs. In PLAFOM2.0, the foraminiferal species do not occur below 200 m of water depth (cf. Figs. 4 and 7d–f), most likely being restricted through food availability and the ambient temperatures. Thus, depending on the vertical resolution of the sampling intervals of a plankton tow sample, the predicted ALD is very likely shallower than the observed ALD by several meters. In summary, PLAFOM2.0 is able to reproduce the observed species behavior with regard to time and depth on a local scale, but it is strongly dependent on the input variables (e.g., temperature and the different food sources) provided by both the ocean and the ecosystem model and is thus limited in its capability to match the observations.
Keeping the caveats regarding the model resolution, model parameterizations, model hierarchy, and analytical constraints on the observations in mind, the model–data mismatch might be reduced by a higher model resolution (in time and space), which would in turn increase the computational costs. A higher taxonomic resolution of the considered species (resulting in an increased number of passive tracers and likewise degrees of freedom) and an explicit parameterization of the ontogeny of each individual planktonic foraminifer, thus considering the changes in the species life cycles with depth (e.g., Bijma et al., 1990a; Bijma, 1991; Bijma and Hemleben, 1994; Bijma et al., 1994; Hemleben and Bijma, 1994; Schiebel et al., 1997), could considerably improve the model. The discrepancies between the model and the observations could additionally be minimized by including better ecological constraints on planktonic foraminifera species and their habitat, e.g., by introducing more phytoplankton and zooplankton functional groups in the ecosystem model to better resolve species food preferences, which would, however, result in increased computational costs. Nevertheless, additional knowledge about the factors controlling the habitat of planktonic foraminifera in time and space, based on culturing experiments and field studies, is needed for an optimization and better validation of the current model version. In addition, due to the model complexity it is not trivial to determine which model component (i.e., POP2, BEC, or PLAFOM2.0) contributes to what extent to the model–data mismatch. Determining this would require a suite of sensitivity experiments with each model component, which should be considered for future work. The model nonetheless produces seasonal and vertical abundance patterns that are consistent with our current understanding and that emerge from the model without an explicit parameterization of abundance in time and space. PLAFOM2.0 thus represents a major step forward from the previous model version and provides an improved basis for assessing paleoclimate information.
## 5 Conclusions
A new version of the dynamic planktonic foraminifera model PLAFOM (PLAFOM2.0) has been developed and combined with the CESM1.2(BGC) model configuration to simulate species-specific seasonal and depth habitats for N. pachyderma, N. incompta, G. bulloides, G. ruber (white), and T. sacculifer on a global scale. In comparison with the original approach in which only species concentrations in the surface mixed layer were predicted, PLAFOM2.0 includes a vertical component and thus predicts species distribution patterns in space and time more realistically.
PLAFOM2.0 produces spatially and temporally coherent abundance patterns, which agree well with available observations. The model configuration faithfully reproduces the areal extent of each species. In line with core-top data, the modeled global distribution of each foraminiferal species changes with latitude. Additionally, PLAFOM2.0 successfully predicts patterns in the timing of the peak fluxes of planktonic foraminiferal species on a global scale. The earlier-when-warmer pattern for the temperate and cold-water species and the focusing of the flux of warm-water species into the warm season at low temperatures, as inferred from observations by Jonkers and Kučera (2015), have emerged from the model.
Although an explicit parameterization of the vertical dimension is lacking, the model successfully predicts the preferred habitat depth of the individual planktonic foraminiferal species as well as the spatial and temporal variability in their vertical abundance. In accordance with the available observations, the warm-water species G. ruber (white) and T. sacculifer consistently occur close to the sea surface year-round in the tropics–subtropics, whereas the depth habitat of the colder-water species N. pachyderma, N. incompta, and G. bulloides changes seasonally in the polar–subpolar regions. During the cold season these species occur near the surface, while during the warmer season they descend in the water column and are found at depths of up to 120 m or even deeper, most likely following the chlorophyll maximum.
In general, paleoceanographic reconstructions based on planktonic foraminifera are hampered by the fact that the environmental signal preserved in their shells is the result of both habitat and climate change. The two effects are difficult to separate without independent data. PLAFOM2.0 presents a powerful tool to address this issue and can contribute to more meaningful comparisons of climate model results and paleoclimate reconstructions, ultimately aiding the understanding of mechanisms of climate change.
Code and data availability.
All model data can be obtained from the PANGAEA database (https://doi.pangaea.de/10.1594/PANGAEA.892469, Kretschmer et al., 2018). The model code is available upon request from the corresponding author (Kerstin Kretschmer, kkretschmer@marum.de).
Supplement.
Author contributions.
KK, MK, and MS designed the study. KK developed the model code and designed and finalized the model experiment. KK and LJ analyzed and interpreted the data. All authors contributed to writing the paper.
Competing interests.
The authors declare that they have no conflict of interest.
Acknowledgements.
This paper has benefited from the constructive comments and suggestions of Inge van Dijk, Jelle Bijma, and two anonymous reviewers, as well as of the handling associate editor Lennart de Nooijer. We are grateful to Graham Mortyn for providing the plankton tow data from the Atlantic sector of the Southern Ocean. We would like to thank Gerlinde Jung and Jeroen Groeneveld for their helpful advice. The model integration has been performed at the North-German Supercomputing Alliance (HLRN). This project was supported by the DFG (Deutsche Forschungsgemeinschaft) through the International Research Training Group IRTG 1904 ArcTrain and the DFG Research Center/Cluster of Excellence “The Ocean in the Earth System”.
The article processing charges for this open-access publication were covered by the University of Bremen.
Edited by: Lennart de Nooijer
Reviewed by: Jelle Bijma, Inge van Dijk, and two anonymous referees
References
Anand, P., Elderfield, H., and Conte, M. H.: Calibration of Mg/Ca thermometry in planktonic foraminifera from a sediment trap time series, Paleoceanography, 18, 1050, https://doi.org/10.1029/2002PA000846, 2003.
Asahi, H. and Takahashi, K.: A 9-year time-series of planktonic foraminifer fluxes and environmental change in the Bering Sea and the central subarctic Pacific Ocean, 1990–1999, Prog. Oceanogr., 72, 343–363, https://doi.org/10.1016/j.pocean.2006.03.021, 2007.
Bauch, D., Carstens, J., and Wefer, G.: Oxygen isotope composition of living Neogloboquadrina pachyderma (sin.) in the Arctic Ocean, Earth Planet. Sc. Lett., 146, 47–58, https://doi.org/10.1016/S0012-821X(96)00211-7, 1997.
Bé, A. W. H.: Some observations on Arctic planktonic foraminifera, Contrib. Cushman Found. Foram. Res., 11, 64–68, 1960.
Bé, A. W. H.: Planktonic Foraminifera, in: Antarctic Map Folio Series 11: Distribution of Selected Groups of Marine Invertebrates in Waters South of 35° S Latitude, 9–12, American Geographical Society, 1969.
Bé, A. W. H. and Hamlin, W. H.: Ecology of Recent Planktonic Foraminifera: Part 3: Distribution in the North Atlantic during the Summer of 1962, Micropaleontology, 13, 87–106, 1967.
Bé, A. W. H. and Hutson, W. H.: Ecology of Planktonic Foraminifera and Biogeographic Patterns of Life and Fossil Assemblages in the Indian Ocean, Micropaleontology, 23, 369–414, https://doi.org/10.2307/1485406, 1977.
Bé, A. W. H. and Tolderlund, D. S.: Distribution and ecology of planktonic foraminifera in surface waters of the Atlantic and Indian Oceans, in: The Micropaleontology of Oceans, edited by: Funnell, B. and Riedel, W., 105–150, Cambridge University Press, London, 1971.
Beckmann, W., Auras, A., and Hemleben, C.: Cyclonic cold-core eddy in the eastern North Atlantic. III. Zooplankton, Marine Ecology – Progress Series, 39, 165–173, https://doi.org/10.3354/meps039165, 1987.
Bergami, C., Capotondi, L., Langone, L., Giglio, F., and Ravaioli, M.: Distribution of living planktonic foraminifera in the Ross Sea and the Pacific sector of the Southern Ocean (Antarctica), Mar. Micropaleontol., 73, 37–48, https://doi.org/10.1016/j.marmicro.2009.06.007, 2009.
Bijma, J.: Lunar pulses of carbonate output by spinose planktonic foraminifera, in: Protozoa and Their Role in Marine Processes. NATO ASI Series G: Ecological Sciences, edited by: Reid, P. C., Turley, C. M., and Burkill, P. H., 353–354, Springer, Berlin, Heidelberg, 1991.
Bijma, J. and Hemleben, C.: Population dynamics of the planktic foraminifer Globigerinoides sacculifer (Brady) from the central Red Sea, Deep-Sea Res. Pt. I, 41, 485–510, 1994.
Bijma, J., Erez, J., and Hemleben, C.: Lunar and semi-lunar reproductive cycles in some spinose planktonic foraminifers, J. Foramin. Res., 20, 117–127, 1990a.
Bijma, J., Faber Jr., W. W., and Hemleben, C.: Temperature and salinity limits for growth and survival of some planktonic foraminifers in laboratory cultures, J. Foramin. Res., 20, 95–116, 1990b.
Bijma, J., Hemleben, C., and Wellnitz, K.: Lunar-influenced carbonate flux of the planktic foraminifer Globigerinoides sacculifer (Brady) from the central Red Sea, Deep-Sea Res. Pt. I, 41, 511–530, 1994.
Bird, C., Darling, K. F., Russell, A. D., Davis, C. V., Fehrenbacher, J., Free, A., Wyman, M., and Ngwenya, B. T.: Cyanobacterial endobionts within a major marine planktonic calcifier (Globigerina bulloides, Foraminifera) revealed by 16S rRNA metabarcoding, Biogeosciences, 14, 901–920, https://doi.org/10.5194/bg-14-901-2017, 2017.
Boltovskoy, E.: Planktonic foraminiferal assemblages of the epipelagic zone and their thanatocoenoses, in: The Micropalaeontology of Oceans, edited by: Funnell, B. M. and Riedel, W. R., 277–288, Cambridge University Press, London, 1971.
Boltovskoy, E., Boltovskoy, D., Correa, N., and Brandini, F.: Planktic foraminifera from the southwestern Atlantic (30–60° S): species-specific patterns in the upper 50 m, Mar. Micropaleontol., 28, 53–72, https://doi.org/10.1016/0377-8398(95)00076-3, 1996.
Bradshaw, J. S.: Ecology of living foraminifera in the North and Equatorial Pacific Ocean, Cushman Foundation for Foraminiferal Research: Contributions, 10, 25–64, 1959.
Caron, D. A., Bé, A. W. H., and Anderson, O. R.: Effects of variations in light intensity on life processes of the planktonic foraminifer Globigerinoides sacculifer in laboratory culture, Journal of the Marine Biological Association of the United Kingdom, 62, 435–451, https://doi.org/10.1017/S0025315400057374, 1982.
Caron, D. A., Faber, W. W., and Bé, A. W. H.: Effects of temperature and salinity on the growth and survival of the planktonic foraminifer Globigerinoides sacculifer, Journal of the Marine Biological Association of the United Kingdom, 67, 323–341, https://doi.org/10.1017/S0025315400026643, 1987.
Carstens, J. and Wefer, G.: Recent distribution of planktonic foraminifera in the Nansen Basin, Arctic Ocean, Deep-Sea Res., 39, S507–S524, 1992.
Colebrook, J. M.: Continuous Plankton Records: Seasonal Cycles of Phytoplankton and Copepods in the North Atlantic Ocean and the North Sea, Mar. Biol., 51, 23–32, 1979.
Colebrook, J. M.: Continuous plankton records: seasonal variations in the distribution and abundance of plankton in the North Atlantic Ocean and the North Sea, J. Plankton Res., 4, 435–462, 1982.
Craig, A. P., Vertenstein, M., and Jacob, R.: A New Flexible Coupler for Earth System Modeling developed for CCSM4 and CESM1, Int. J. High Perform. C., 26, 31–42, https://doi.org/10.1177/1094342011428141, 2012.
Curry, W. B., Ostermann, D. R., Guptha, M. V. S., and Ittekkot, V.: Foraminiferal production and monsoonal upwelling in the Arabian Sea: evidence from sediment traps, Geological Society, London, Special Publications, 64, 93–106, https://doi.org/10.1144/GSL.SP.1992.064.01.06, 1992.
Danabasoglu, G., Bates, S. C., Briegleb, B. P., Jayne, S. R., Jochum, M., Large, W. G., Peacock, S., and Yeager, S. G.: The CCSM4 Ocean Component, J. Climate, 25, 1361–1389, https://doi.org/10.1175/JCLI-D-11-00091.1, 2012.
Danabasoglu, G., Yeager, S. G., Bailey, D., Behrens, E., Bentsen, M., Bi, D., Biastoch, A., Böning, C., Bozec, A., Canuto, V. M., Cassou, C., Chassignet, E., Coward, A. C., Danilov, S., Diansky, N., Drange, H., Farneti, R., Fernandez, E., Fogli, P. G., Forget, G., Fujii, Y., Griffies, S. M., Gusev, A., Heimbach, P., Howard, A., Jung, T., Kelley, M., Large, W. G., Leboissetier, A., Lu, J., Madec, G., Marsland, S. J., Masina, S., Navarra, A., Nurser, A. J. G., Pirani, A., Salas y Mélia, D., Samuels, B. L., Scheinert, M., Sidorenko, D., Treguier, A.-M., Tsujino, H., Uotila, P., Valcke, S., Voldoire, A., and Wang, Q.: North Atlantic simulations in Coordinated Ocean-ice Reference Experiments phase II (CORE-II). Part I: Mean states, Ocean Model., 73, 76–107, https://doi.org/10.1016/j.ocemod.2013.10.005, 2014.
Darling, K. F., Kucera, M., Pudsey, C. J., and Wade, C. M.: Molecular evidence links cryptic diversification in polar planktonic protists to Quaternary climate dynamics, P. Natl. Acad. Sci. USA, 101, 7657–7662, 2004.
Darling, K. F., Kucera, M., Kroon, D., and Wade, C. M.: A resolution for the coiling direction paradox in Neogloboquadrina pachyderma, Paleoceanography, 21, PA2011, https://doi.org/10.1029/2005PA001189, 2006.
Darling, K. F., Kucera, M., and Wade, C. M.: Global molecular phylogeography reveals persistent Arctic circumpolar isolation in a marine planktonic protist, P. Natl. Acad. Sci. USA, 104, 5002–5007, https://doi.org/10.1073/pnas.0700520104, 2007.
Dee, S., Emile-Geay, J., Evans, M. N., Allam, A., Steig, E. J., and Thompson, D. M.: PRYSM: An open-source framework for PRoxY System Modeling, with applications to oxygen-isotope systems, J. Adv. Model. Earth Sy., 7, 1220–1247, https://doi.org/10.1002/2015MS000447, 2015.
Dee, S. G., Steiger, N. J., Emile-Geay, J., and Hakim, G. J.: On the utility of proxy system models for estimating climate states over the common era, J. Adv. Model. Earth Sy., 8, 1164–1179, https://doi.org/10.1002/2016MS000677, 2016.
Deuser, W. G., Ross, E. H., Hemleben, C., and Spindler, M.: Seasonal changes in species composition, numbers, mass, size, and isotopic composition of planktonic foraminifera settling into the deep Sargasso Sea, Palaeogeogr. Palaeocl., 33, 103–127, https://doi.org/10.1016/0031-0182(81)90034-1, 1981.
Doney, S. C., Glover, D. M., and Najjar, R. G.: A new coupled, one-dimensional biological-physical model for the upper ocean: Applications to the JGOFS Bermuda Atlantic Time-series Study (BATS) site, Deep-Sea Res. Pt. II, 43, 591–624, https://doi.org/10.1016/0967-0645(95)00104-2, 1996.
Doney, S. C., Lindsay, K., Fung, I., and John, J.: Natural Variability in a Stable, 1000-Yr Global Coupled Climate-Carbon Cycle Simulation, J. Climate, 19, 3033–3054, https://doi.org/10.1175/JCLI3783.1, 2006.
Donner, B. and Wefer, G.: Flux and stable isotope composition of Neogloboquadrina pachyderma and other planktonic foraminifers in the Southern Ocean (Atlantic sector), Deep-Sea Res. Pt. I, 41, 1733–1743, 1994.
Duplessy, J.-C., Blanc, P.-L., and Bé, A. W. H.: Oxygen-18 Enrichment of Planktonic Foraminifera Due to Gametogenic Calcification Below the Euphotic Zone, Science, 213, 1247–1250, 1981.
Eguchi, N. O., Kawahata, H., and Taira, A.: Seasonal Response of Planktonic Foraminifera to Surface Ocean Condition: Sediment Trap Results from the Central North Pacific Ocean, J. Oceanogr., 55, 681–691, 1999.
Eguchi, N. O., Ujiié, H., Kawahata, H., and Taira, A.: Seasonal variations in planktonic foraminifera at three sediment traps in the Subarctic, Transition and Subtropical zones of the central North Pacific Ocean, Mar. Micropaleontol., 48, 149–163, https://doi.org/10.1016/S0377-8398(03)00020-3, 2003.
Erez, J.: Calcification Rates, Photosynthesis and Light in Planktonic Foraminifera, in: Biomineralization and Biological Metal Accumulation: Biological and Geological Perspectives Papers presented at the Fourth International Symposium on Biomineralization, Renesse, the Netherlands, 2–5 June 1982, edited by: Westbroek, P. and de Jong, E. W., 307–312, Springer Netherlands, Dordrecht, the Netherlands, https://doi.org/10.1007/978-94-009-7944-4_29, 1983.
Evans, M. N., Tolwinski-Ward, S. E., Thompson, D. M., and Anchukaitis, K. J.: Applications of proxy system modeling in high resolution paleoclimatology, Quaternary Sci. Rev., 76, 16–28, https://doi.org/10.1016/j.quascirev.2013.05.024, 2013.
Fairbanks, R. G. and Wiebe, P. H.: Foraminifera and Chlorophyll Maximum: Vertical Distribution, Seasonal Succession, and Paleoceanographic Significance, Science, 209, 1524–1526, 1980.
Fairbanks, R. G., Wiebe, P. H., and Bé, A. W. H.: Vertical Distribution and Isotopic Composition of Living Planktonic Foraminifera in the Western North Atlantic, Science, 207, 61–63, 1980.
Fairbanks, R. G., Sverdlove, M., Free, R., Wiebe, P. H., and Bé, A. W. H.: Vertical distribution and isotopic fractionation of living planktonic foraminifera from the Panama Basin, Nature, 298, 841–844, 1982.
Field, D. B.: Variability in vertical distributions of planktonic foraminifera in the California Current: Relationships to vertical ocean structure, Paleoceanography, 19, PA2014, https://doi.org/10.1029/2003PA000970, 2004.
Fraile, I., Schulz, M., Mulitza, S., and Kucera, M.: Predicting the global distribution of planktonic foraminifera using a dynamic ecosystem model, Biogeosciences, 5, 891–911, https://doi.org/10.5194/bg-5-891-2008, 2008.
Fraile, I., Mulitza, S., and Schulz, M.: Modeling planktonic foraminiferal seasonality: Implications for sea-surface temperature reconstructions, Mar. Micropaleontol., 72, 1–9, https://doi.org/10.1016/j.marmicro.2009.01.003, 2009a.
Fraile, I., Schulz, M., Mulitza, S., Merkel, U., Prange, M., and Paul, A.: Modeling the seasonal distribution of planktonic foraminifera during the Last Glacial Maximum, Paleoceanography, 24, PA2216, https://doi.org/10.1029/2008PA001686, 2009b.
Friedland, K. D., Record, N. R., Asch, R. G., Kristiansen, T., Saba, V. S., Drinkwater, K. F., Henson, S., Leaf, R. T., Morse, R. E., Johns, D. G., Large, S. I., Hjøllo, S. S., Nye, J. A., Alexander, M. A., and Ji, R.: Seasonal phytoplankton blooms in the North Atlantic linked to the overwintering strategies of copepods, Elementa: Science of the Anthropocene, 4, 000099, https://doi.org/10.12952/journal.elementa.000099, 2016.
Ganssen, G. M. and Kroon, D.: The isotopic signature of planktonic foraminifera from NE Atlantic surface sediments: implications for the reconstruction of past oceanic conditions, J. Geol. Soc. London, 157, 693–699, https://doi.org/10.1144/jgs.157.3.693, 2000.
Garcia, H. E., Locarnini, R. A., Boyer, T. P., Antonov, J. I., Zweng, M. M., Baranova, O. K., and Johnson, D. R.: World Ocean Atlas 2009, Volume 4: Nutrients (phosphate, nitrate, silicate), U.S. Government Printing Office, Washington, DC, 2010.
Gastrich, M. D.: Ultrastructure of a new intracellular symbiotic alga found within planktonic foraminifera, J. Phycol., 23, 623–632, https://doi.org/10.1111/j.1529-8817.1987.tb04215.x, 1987.
Gastrich, M. D. and Bartha, R.: Primary productivity in the planktonic foraminifer Globigerinoides ruber (D'Orbigny), J. Foramin. Res., 18, 137–142, 1988.
Geider, R. J., Maclntyre, H. L., and Kana, T. M.: A dynamic regulatory model of phytoplanktonic acclimation to light, nutrients, and temperature, Limnol. Oceanogr., 43, 679–694, 1998.
Giraudeau, J.: Planktonic foraminiferal assemblages in surface sediments from the south western African continental margin, Mar. Geol., 110, 47–62, https://doi.org/10.1016/0025-3227(93)90104-4, 1993.
Goosse, H., Crespin, E., de Montety, A., Mann, M. E., Renssen, H., and Timmermann, A.: Reconstructing surface temperature changes over the past 600 years using climate model simulations with data assimilation, J. Geophys. Res., 115, D09108, https://doi.org/10.1029/2009JD012737, 2010.
Griffies, S. M., Biastoch, A., Böning, C., Bryan, F., Danabasoglu, G., Chassignet, E. P., England, M. H., Gerdes, R., Haak, H., Hallberg, R. W., Hazeleger, W., Jungclaus, J., Large, W. G., Madec, G., Pirani, A., Samuels, B. L., Scheinert, M., Gupta, A. S., Severijns, C. A., Simmons, H. L., Treguier, A. M., Winton, M., Yeager, S., and Yin, J.: Coordinated Ocean-ice Reference Experiments (COREs), Ocean Model., 26, 1–46, https://doi.org/10.1016/j.ocemod.2008.08.007, 2009.
Hakim, G. J., Emile-Geay, J., Steig, E. J., Noone, D., Anderson, D. M., Tardif, R., Steiger, N., and Perkins, W. A.: The last millennium climate reanalysis project: Framework and first results, J. Geophys. Res.-Atmos., 121, 6745–6764, https://doi.org/10.1002/2016JD024751, 2016.
Hemleben, C. and Bijma, J.: Foraminiferal population dynamics and stable carbon isotopes, in: Carbon Cycling in the Glacial Ocean: Constraints on the Ocean's Role in Global Change, edited by: Zahn, R., Pedersen, T. F., Kaminski, M. A., and Labeyrie, L., 145–166, Springer, Berlin, Heidelberg, 1994.
Hemleben, C., Spindler, M., and Anderson, O. R.: Modern Planktonic Foraminifera, Springer, New York, 1989.
Holland, M. M., Bailey, D. A., Briegleb, B. P., Light, B., and Hunke, E.: Improved Sea Ice Shortwave Radiation Physics in CCSM4: The Impact of Melt Ponds and Aerosols on Arctic Sea Ice, J. Climate, 25, 1413–1430, https://doi.org/10.1175/JCLI-D-11-00078.1, 2012.
Hunke, E. C. and Lipscomb, W. H.: CICE: The Los Alamos Sea Ice Model, Documentation and Software User's Manual, version 4.0, Tech. rep., Los Alamos National Laboratory, 2008.
Hurrell, J. W., Holland, M. M., Gent, P. R., Ghan, S., Kay, J. E., Kushner, P. J., Lamarque, J.-F., Large, W. G., Lawrence, D., Lindsay, K., Lipscomb, W. H., Long, M. C., Mahowald, N., Marsh, D. R., Neale, R. B., Rasch, P., Vavrus, S., Vertenstein, M., Bader, D., Collins, W. D., Hack, J. J., Kiehl, J., and Marshall, S.: The Community Earth System Model: A framework for collaborative research, B. Am. Meteorol. Soc., 94, 1339–1360, https://doi.org/10.1175/BAMS-D-12-00121.1, 2013.
Husum, K. and Hald, M.: Arctic planktic foraminiferal assemblages: Implications for subsurface temperature reconstructions, Mar. Micropaleontol., 96–97, 38–47, https://doi.org/10.1016/j.marmicro.2012.07.001, 2012.
Iwasaki, S., Kimoto, K., Kuroyanagi, A., and Kawahata, H.: Horizontal and vertical distributions of planktic foraminifera in the subarctic Pacific, Mar. Micropaleontol., 130, 1–14, https://doi.org/10.1016/j.marmicro.2016.12.001, 2017.
Jodłowska, S. and Śliwińska, S.: Effects of light intensity and temperature on the photosynthetic irradiance response curves and chlorophyll fluorescence in three picocyanobacterial strains of Synechococcus, Photosynthetica, 52, 223–232, https://doi.org/10.1007/s11099-014-0024-y, 2014.
Jonkers, L. and Kucera, M.: Global analysis of seasonality in the shell flux of extant planktonic Foraminifera, Biogeosciences, 12, 2207–2226, https://doi.org/10.5194/bg-12-2207-2015, 2015.
Jonkers, L. and Kucera, M.: Quantifying the effect of seasonal and vertical habitat tracking on planktonic foraminifera proxies, Clim. Past, 13, 573–586, https://doi.org/10.5194/cp-13-573-2017, 2017.
Jonkers, L., Brummer, G.-J. A., Peeters, F. J. C., van Aken, H. M., and De Jong, M. F.: Seasonal stratification, shell flux, and oxygen isotope dynamics of left-coiling N. pachyderma and T. quinqueloba in the western subpolar North Atlantic, Paleoceanography, 25, PA2204, https://doi.org/10.1029/2009PA001849, 2010.
Jonkers, L., van Heuven, S., Zahn, R., and Peeters, F. J. C.: Seasonal patterns of shell flux, δ18O and δ13C of small and large N. pachyderma (s) and G. bulloides in the subpolar North Atlantic, Paleoceanography, 28, 164–174, https://doi.org/10.1002/palo.20018, 2013.
Jørgensen, B. B., Erez, J., Revsbech, N. P., and Cohen, Y.: Symbiotic photosynthesis in a planktonic foraminiferan, Globigerinoides sacculifer (Brady), studied with microelectrodes, Limnol. Oceanogr., 30, 1253–1267, https://doi.org/10.4319/lo.1985.30.6.1253, 1985.
Kemle-von Mücke, S. and Oberhänsli, H.: The Distribution of Living Planktic Foraminifera in Relation to Southeast Atlantic Oceanography, in: Use of Proxies in Paleoceanography, 91–115, Springer, Berlin, Heidelberg, 1999.
Key, R. M., Kozyr, A., Sabine, C. L., Lee, K., Wanninkhof, R., Bullister, J. L., Feely, R. A., Millero, F. J., Mordy, C., and Peng, T.-H.: A global ocean carbon climatology: Results from Global Data Analysis Project (GLODAP), Global Biogeochem. Cy., 18, GB4031, https://doi.org/10.1029/2004GB002247, 2004.
Kincaid, E., Thunell, R. C., Le, J., Lange, C. B., Weinheimer, A. L., and Reid, F. M. H.: Planktonic foraminiferal fluxes in the Santa Barbara Basin: response to seasonal and interannual hydrographic changes, Deep-Sea Res. Pt. II, 47, 1157–1176, https://doi.org/10.1016/S0967-0645(99)00140-X, 2000.
Kohfeld, K. E., Fairbanks, R. G., Smith, S. L., and Walsh, I. D.: Neogloboquadrina pachyderma (sinistral coiling) as paleoceanographic tracers in polar oceans: Evidence from Northeast Water Polynya plankton tows, sediment traps, and surface sediments, Paleoceanography, 11, 679–699, 1996.
Kretschmer, K., Kucera, M., and Schulz, M.: Modeling the distribution and seasonality of Neogloboquadrina pachyderma in the North Atlantic Ocean during Heinrich Stadial 1, Paleoceanography, 31, 986–1010, https://doi.org/10.1002/2015PA002819, 2016.
Kretschmer, K., Jonkers, L., Kucera, M., and Schulz, M.: Investigation of the seasonal and vertical habitats of planktonic foraminifera using an ecosystem modeling approach, PANGAEA, https://doi.pangaea.de/10.1594/PANGAEA.892469, 2018.
Krishnamurthy, A., Moore, J. K., Zender, C. S., and Luo, C.: Effects of atmospheric inorganic nitrogen deposition on ocean biogeochemistry, J. Geophys. Res., 112, G02019, https://doi.org/10.1029/2006JG000334, 2007.
Kucera, M.: Planktonic Foraminifera as Tracers of Past Oceanic Environments, in: Proxies in Late Cenozoic Paleoceanography, edited by: Hillaire-Marcel, C. and de Vernal, A., chap. 6, 213–262, Elsevier, Amsterdam, https://doi.org/10.1016/S1572-5480(07)01011-1, 2007.
Kucera, M. and Darling, K. F.: Cryptic species of planktonic foraminifera: their effect on palaeoceanographic reconstructions, Philos. T. R. Soc. Lond., 360, 695–718, 2002.
Kucera, M., Weinelt, M., Kiefer, T., Pflaumann, U., Hayes, A., Weinelt, M., Chen, M.-T., Mix, A. C., Barrows, T. T., Cortijo, E., Duprat, J., Juggins, S., and Waelbroeck, C.: Reconstruction of sea-surface temperatures from assemblages of planktonic foraminifera: multi-technique approach based on geographically constrained calibration data sets and its application to glacial Atlantic and Pacific Oceans, Quaternary Sci. Rev., 24, 951–998, https://doi.org/10.1016/j.quascirev.2004.07.014, 2005.
Kuroyanagi, A. and Kawahata, H.: Vertical distribution of living planktonic foraminifera in the seas around Japan, Mar. Micropaleontol., 53, 173–196, https://doi.org/10.1016/j.marmicro.2004.06.001, 2004.
Kuroyanagi, A., Kawahata, H., Nishi, H., and Honda, M. C.: Seasonal changes in planktonic foraminifera in the northwestern North Pacific Ocean: sediment trap experiments from subarctic and subtropical gyres, Deep-Sea Res. Pt. II, 49, 5627–5645, https://doi.org/10.1016/S0967-0645(02)00202-3, 2002.
Large, W. G. and Yeager, S. G.: Diurnal to Decadal Global Forcing For Ocean and Sea-Ice Models: The Data Sets and Flux Climatologies, NCAR Technical Note NCAR/TN-460+STR, 2004.
Large, W. G. and Yeager, S. G.: The global climatology of an interannually varying air–sea flux data set, Clim. Dynam., 33, 341–364, https://doi.org/10.1007/s00382-008-0441-3, 2009.
Lindsay, K., Bonan, G. B., Doney, S. C., Hoffman, F. M., Lawrence, D. M., Long, M. C., Mahowald, N. M., Moore, J. K., Randerson, J. T., and Thornton, P. E.: Preindustrial-Control and Twentieth-Century Carbon Cycle Experiments with the Earth System Model CESM1(BGC), J. Climate, 27, 8981–9005, https://doi.org/10.1175/JCLI-D-12-00565.1, 2014.
Lombard, F., Labeyrie, L., Michel, E., Bopp, L., Cortijo, E., Retailleau, S., Howa, H., and Jorissen, F.: Modelling planktic foraminifer growth and distribution using an ecophysiological multi-species approach, Biogeosciences, 8, 853–873, https://doi.org/10.5194/bg-8-853-2011, 2011.
Luo, C., Mahowald, N. M., and del Corral, J.: Sensitivity study of meteorological parameters on mineral aerosol mobilization, transport, and distribution, J. Geophys. Res., 108, 4447, https://doi.org/10.1029/2003JD003483, 2003.
Mann, K. H. and Lazier, J. R. N.: Dynamics of Marine Ecosystems: Biological-Physical Interactions in the Oceans, Blackwell Publishing Ltd, 2nd edn., 1996.
Mix, A. C.: The oxygen-isotope record of glaciation, in: The Geology of North America, chap. 6, 111–135, The Geological Society of America, 1987.
Mohiuddin, M. M., Nishimura, A., Tanaka, Y., and Shimamoto, A.: Regional and interannual productivity of biogenic components and planktonic foraminiferal fluxes in the northwestern Pacific Basin, Mar. Micropaleontol., 45, 57–82, https://doi.org/10.1016/S0377-8398(01)00045-7, 2002.
Mohiuddin, M. M., Nishimura, A., Tanaka, Y., and Shimamoto, A.: Seasonality of biogenic particle and planktonic foraminifera fluxes: Response to hydrographic variability in the Kuroshio Extension, northwestern Pacific Ocean, Deep-Sea Res. Pt. I, 51, 1659–1683, https://doi.org/10.1016/j.dsr.2004.06.002, 2004.
Mohiuddin, M. M., Nishimura, A., and Tanaka, Y.: Seasonal succession, vertical distribution, and dissolution of planktonic foraminifera along the Subarctic Front: Implications for paleoceanographic reconstruction in the northwestern Pacific, Mar. Micropaleontol., 55, 129–156, https://doi.org/10.1016/j.marmicro.2005.02.007, 2005.
Moore, J. K. and Braucher, O.: Sedimentary and mineral dust sources of dissolved iron to the world ocean, Biogeosciences, 5, 631–656, https://doi.org/10.5194/bg-5-631-2008, 2008.
Moore, J. K., Doney, S. C., Glover, D. M., and Fung, I. Y.: Iron cycling and nutrient-limitation patterns in surface waters of the World Ocean, Deep-Sea Res. Pt. II, 49, 463–507, 2002a.
Moore, J. K., Doney, S. C., Kleypas, J. A., Glover, D. M., and Fung, I. Y.: An intermediate complexity marine ecosystem model for the global domain, Deep-Sea Res. Pt. II, 49, 403–462, 2002b.
Moore, J. K., Doney, S. C., and Lindsay, K.: Upper ocean ecosystem dynamics and iron cycling in a global three-dimensional model, Global Biogeochem. Cy., 18, GB4028, https://doi.org/10.1029/2004GB002220, 2004.
Moore, J. K., Doney, S. C., Lindsay, K., Mahowald, N., and Michaels, A. F.: Nitrogen fixation amplifies the ocean biogeochemical response to decadal timescale variations in mineral dust deposition, Tellus, 58B, 560–572, https://doi.org/10.1111/j.1600-0889.2006.00209.x, 2006.
Moore, J. K., Lindsay, K., Doney, S. C., Long, M. C., and Misumi, K.: Marine Ecosystem Dynamics and Biogeochemical Cycling in the Community Earth System Model [CESM1(BGC)]: Comparison of the 1990s with the 2090s under the RCP4.5 and RCP8.5 Scenarios, J. Climate, 26, 9291–9312, https://doi.org/10.1175/JCLI-D-12-00566.1, 2013.
Morard, R., Quillévéré, F., Escarguel, G., de Garidel-Thoron, T., de Vargas, C., and Kucera, M.: Ecological modeling of the temperature dependence of cryptic species of planktonic Foraminifera in the Southern Hemisphere, Palaeogeogr. Palaeocl., 391, 13–33, https://doi.org/10.1016/j.palaeo.2013.05.011, 2013.
Mortyn, P. G. and Charles, C. D.: Planktonic foraminiferal depth habitat and δ18O calibrations: Plankton tow results from the Atlantic sector of the Southern Ocean, Paleoceanography, 18, 1037, https://doi.org/10.1029/2001PA000637, 2003.
Mulitza, S., Wolff, T., Pätzold, J., Hale, W., and Wefer, G.: Temperature sensitivity of planktic foraminifera and its influence on the oxygen isotope record, Mar. Micropaleontol., 33, 223–240, 1998.
Naidu, P. D. and Malmgren, B. A.: A high-resolution record of late Quaternary upwelling along the Oman Margin, Arabian Sea based on planktonic foraminifera, Paleoceanography, 11, 129–140, 1996.
Northcote, L. C. and Neil, H. L.: Seasonal variations in foraminiferal flux in the Southern Ocean, Campbell Plateau, New Zealand, Mar. Micropaleontol., 56, 122–137, https://doi.org/10.1016/j.marmicro.2005.05.001, 2005.
Nyland, B. F., Jansen, E., Elderfield, H., and Andersson, C.: Neogloboquadrina pachyderma (dex. and sin.) Mg/Ca and δ18O records from the Norwegian Sea, Geochem. Geophy. Geosy., 7, Q10P17, https://doi.org/10.1029/2005GC001055, 2006.
Ortiz, J. D., Mix, A. C., and Collier, R. W.: Environmental control of living symbiotic and asymbiotic foraminifera of the California Current, Paleoceanography, 10, 987–1009, 1995.
Ortiz, J. D., Mix, A. C., Rugh, W., Watkins, J. M., and Collier, R. W.: Deep-dwelling planktonic foraminifera of the northeastern Pacific Ocean reveal environmental control of oxygen and carbon isotopic disequilibria, Geochim. Cosmochim. Ac., 60, 4509–4523, 1996.
Pados, T. and Spielhagen, R. F.: Species distribution and depth habitat of recent planktic foraminifera in Fram Strait, Arctic Ocean, Polar Res., 33, 22483, https://doi.org/10.3402/polar.v33.22483, 2014.
Peeters, F. J. C. and Brummer, G.-J. A.: The seasonal and vertical distribution of living planktic foraminifera in the NW Arabian Sea, Geological Society, London, Special Publications, 195, 463–497, https://doi.org/10.1144/GSL.SP.2002.195.01.26, 2002.
Pflaumann, U., Duprat, J., Pujol, C., and Labeyrie, L. D.: SIMMAX: A modern analog technique to deduce Atlantic sea surface temperatures from planktonic foraminifera in deep-sea sediments, Paleoceanography, 11, 15–35, 1996.
Pflaumann, U., Sarnthein, M., Chapman, M., de Abreu, L., Funnell, B., Huels, M., Kiefer, T., Maslin, M., Schulz, H., Swallow, J., van Kreveld, S., Vautravers, M., Vogelsang, E., and Weinelt, M.: Glacial North Atlantic: Sea-surface conditions reconstructed by GLAMAP 2000, Paleoceanography, 18, 1065, https://doi.org/10.1029/2002PA000774, 2003.
Pollard, D. and Schulz, M.: A model for the potential locations of Triassic evaporite basins driven by paleoclimatic GCM simulations, Global Planet. Change, 9, 233–249, 1994.
Prell, W. L., Martin, A., Cullen, J. L., and Trend, M.: The Brown University Foraminiferal Data Base, IGBP PAGES/World Data Center-A for Paleoclimatology, Data Contribution Series # 1999-027, NOAA/NGDC Paleoclimatology Program, Boulder CO, USA, available at: https://www.ncdc.noaa.gov/paleo/metadata/noaa-ocean-5908.html, 1999.
Rebotim, A., Voelker, A. H. L., Jonkers, L., Waniek, J. J., Meggers, H., Schiebel, R., Fraile, I., Schulz, M., and Kucera, M.: Factors controlling the depth habitat of planktonic foraminifera in the subtropical eastern North Atlantic, Biogeosciences, 14, 827–859, https://doi.org/10.5194/bg-14-827-2017, 2017.
Rippert, N., Nürnberg, D., Raddatz, J., Maier, E., Hathorne, E., Bijma, J., and Tiedemann, R.: Constraining foraminiferal calcification depths in the western Pacific warm pool, Mar. Micropaleontol., 128, 14–27, https://doi.org/10.1016/j.marmicro.2016.08.004, 2016. a, b
Salmon, K. H., Anand, P., Sexton, P. F., and Conte, M.: Upper ocean mixing controls the seasonality of planktonic foraminifer fluxes and associated strength of the carbonate pump in the oligotrophic North Atlantic, Biogeosciences, 12, 223–235, https://doi.org/10.5194/bg-12-223-2015, 2015. a, b, c, d, e
Schiebel, R., Bijma, J., and Hemleben, C.: Population dynamics of the planktic foraminifer Globigerina bulloides from the eastern North Atlantic, Deep-Sea Res. Pt. I, 44, 1701–1713, 1997. a, b
Schiebel, R., Waniek, J., Bork, M., and Hemleben, C.: Planktic foraminiferal production stimulated by chlorophyll redistribution and entrainment of nutrients, Deep-Sea Re. Pt I, 48, 721–740, 2001. a, b, c, d, e
Schiebel, R., Waniek, J., Zeltner, A., and Alves, M.: Impact of the Azores Front on the distribution of planktic foraminifers, shelled gastropods, and coccolithophorids, Deep-Sea Res. Pt. II, 49, 4035–4050, 2002. a
Schmidt, D. N., Renaud, S., Bollmann, J., Schiebel, R., and Thierstein, H. R.: Size distribution of Holocene planktic foraminifer assemblages: biogeography, ecology and adaptation, Mar. Micropaleontol., 50, 319–338, 2004. a, b
Schmidt, G. A.: Forward modeling of carbonate proxy data from planktonic foraminifera using oxygen isotope tracers in a global ocean model, Paleoceanography, 14, 482–497, https://doi.org/10.1029/1999PA900025, 1999. a
Sherman, E., Moore, J. K., Primeau, F., and Tanouye, D.: Temperature influence on phytoplankton community growth rates, Global Biogeochem. Cy., 30, 550–559, https://doi.org/10.1002/2015GB005272, 2016. a, b
Simstich, J., Sarnthein, M., and Erlenkeuser, H.: Paired δ18O signals of Neogloboquadrina pachyderma (s) and Turborotalita quinqueloba show thermal stratification structure in Nordic Seas, Mar. Micropaleontol., 48, 107–125, https://doi.org/10.1016/S0377-8398(02)00165-2, 2003. a, b, c, d
Skinner, L. C. and Elderfield, H.: Constraining ecological and biological bias in planktonic foraminiferal Mg/Ca and δ18Occ: A multispecies approach to proxy calibration testing, Paleoceanography, 20, PA1015, https://doi.org/10.1029/2004PA001058, 2005. a
Smith, R., Jones, P., Briegleb, B., Bryan, F., Danabasoglu, G., Dennis, J., Dukowicz, J., Eden, C., Fox-Kemper, B., Gent, P., Hecht, M., Jayne, S., Jochum, M., Large, W., Lindsay, K., Maltrud, M., Norton, N., Peacock, S., Vertenstein, M., and Yeager, S.: The Parallel Ocean Program (POP) reference manual: Ocean component of the Community Climate System Model (CCSM) and Community Earth System Model (CESM), Tech. rep., Los Alamos National Laboratory, 2010. a
Spezzaferri, S., Kucera, M., Pearson, P. N., Wade, B. S., Rappo, S., Poole, C. R., Morard, R., and Stalder, C.: Fossil and Genetic Evidence for the Polyphyletic Nature of the Planktonic Foraminifera “Globigerinoides”, and Description of the New Genus Trilobatus, PLoS ONE, 10, 1–20, https://doi.org/10.1371/journal.pone.0128108, 2015. a
Stangeew, E.: Distribution and Isotopic Composition of Living Planktonic Foraminifera N. pachyderma (sinistral) and T. quinqueloba in the High Latitude North Atlantic, PhD thesis, Christian-Albrechts-Universität zu Kiel, 2001. a
Steiger, N. J., Hakim, G. J., Steig, E. J., Battisti, D. S., and Roe, G. H.: Assimilation of Time-Averaged Pseudoproxies for Climate Reconstruction, J. Climate, 27, 426–441, https://doi.org/10.1175/JCLI-D-12-00693.1, 2014. a
Storz, D., Schulz, H., Waniek, J. J., Schulz-Bull, D. E., and Kučera, M.: Seasonal and interannual variability of the planktic foraminiferal flux in the vicinity of the Azores Current, Deep-Sea Res. Pt. I, 56, 107–124, https://doi.org/10.1016/j.dsr.2008.08.009, 2009. a, b, c, d
Taboada, F. G. and Anadón, R.: Seasonality of North Atlantic phytoplankton from space: Impact of environmental forcing on a changing phenology (1998–2012), Glob. Change Biol., 20, 698–712, https://doi.org/10.1111/gcb.12352, 2014. a
Thiede, J.: Distribution of foraminifera in surface waters of a coastal upwelling area, Nature, 253, 712–714, https://doi.org/10.1038/253712a0, 1975. a, b
Ting, C. S., Rocap, G., King, J., and Chisholm, S. W.: Cyanobacterial photosynthesis in the oceans: The origins and significance of divergent light-harvesting strategies, Trends Microbiol., 10, 134–142, https://doi.org/10.1016/S0966-842X(02)02319-3, 2002. a
van Raden, U. J., Groeneveld, J., Raitzsch, M., and Kucera, M.: Mg/Ca in the planktonic foraminifera Globorotalia inflata and Globigerinoides bulloides from Western Mediterranean plankton tow and core top samples, Mar. Micropaleontol., 78, 101–112, https://doi.org/10.1016/j.marmicro.2010.11.002, 2011. a
Völpel, R., Paul, A., Krandick, A., Mulitza, S., and Schulz, M.: Stable water isotopes in the MITgcm, Geosci. Model Dev., 10, 3125–3144, https://doi.org/10.5194/gmd-10-3125-2017, 2017. a
Watkins, J. M., Mix, A. C., and Wilson, J.: Living planktic foraminifera: tracers of circulation and productivity regimes in the central equatorial Pacific, Deep-Sea Res. Pt. II, 43, 1257–1282, 1996. a, b, c
Watkins, J. M., Mix, A. C., and Wilson, J.: Living planktic foraminifera in the central tropical Pacific Ocean: Articulating the equatorial 'cold tongue' during La Niña, 1992, Mar. Micropaleontol., 33, 157–174, https://doi.org/10.1016/S0377-8398(97)00036-4, 1998. a, b, c, d, e
Wilke, I., Meggers, H., and Bickert, T.: Depth habitats and seasonal distributions of recent planktic foraminifers in the Canary Islands region (29 N) based on oxygen isotopes, Deep-Sea Res. Pt. I, 56, 89–106, https://doi.org/10.1016/j.dsr.2008.08.001, 2009. a, b, c, d, e, f, g, h, i, j, k
Wolfteich, C. M.: Satellite-Derived Sea Surface Temperature, Mesoscale Variability, and Foraminiferal Production in the North Atlantic, M.Sc., Cambridge, MS, 1994. a, b, c, d
Yu, E.-F., Francois, R., Bacon, M. P., Honjo, S., Fleer, A. P., Manganini, S. J., Rutgers van der Loeff, M. M., and Ittekot, V.: Trapping efficiency of bottom-tethered sediment traps estimated from the intercepted fluxes of 230Th and 230Pa, Deep-Sea Res. Pt. I, 48, 865–889, https://doi.org/10.1016/S0967-0637(00)00067-4, 2001. a
Žarić, S., Donner, B., Fischer, G., Mulitza, S., and Wefer, G.: Sensitivity of planktic foraminifera to sea surface temperature and export production as derived from sediment trap data, Mar. Micropaleontol., 55, 75–105, https://doi.org/10.1016/j.marmicro.2005.01.002, 2005. a
Žarić, S., Schulz, M., and Mulitza, S.: Global prediction of planktic foraminiferal fluxes from hydrographic and productivity data, Biogeosciences, 3, 187–207, https://doi.org/10.5194/bg-3-187-2006, 2006. a
http://judge.u-aizu.ac.jp/onlinejudge/description.jsp?id=ALDS1_7_C
Time Limit : sec, Memory Limit : KB
# Tree Walk
Binary trees are defined recursively. A binary tree T is a structure defined on a finite set of nodes that either
• contains no nodes, or
• is composed of three disjoint sets of nodes:
- a root node.
- a binary tree called its left subtree.
- a binary tree called its right subtree.
Your task is to write a program which performs tree walks (systematically traversing all nodes in a tree) based on the following algorithms:
1. Print the root, the left subtree and right subtree (preorder).
2. Print the left subtree, the root and right subtree (inorder).
3. Print the left subtree, right subtree and the root (postorder).
Here, the given binary tree consists of n nodes and every node has a unique ID from 0 to n-1.
## Input
The first line of the input includes an integer n, the number of nodes of the tree.
In the next n lines, the information of each node is given in the following format:
id left right
id is the node ID, left is the ID of the left child, and right is the ID of the right child. If the node does not have a left (right) child, left (right) is indicated by -1.
## Output
In the 1st line, print "Preorder", and in the 2nd line print a list of node IDs obtained by the preorder tree walk.
In the 3rd line, print "Inorder", and in the 4th line print a list of node IDs obtained by the inorder tree walk.
In the 5th line, print "Postorder", and in the 6th line print a list of node IDs obtained by the postorder tree walk.
Print a space character before each node ID.
## Constraints
• 1 ≤ n ≤ 25
## Sample Input 1
9
0 1 4
1 2 3
2 -1 -1
3 -1 -1
4 5 8
5 6 7
6 -1 -1
7 -1 -1
8 -1 -1
## Sample Output 1
Preorder
0 1 2 3 4 5 6 7 8
Inorder
2 1 3 0 6 5 7 4 8
Postorder
2 3 1 6 7 5 8 4 0
## Reference
Introduction to Algorithms, Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. The MIT Press.
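For reference, here is one straightforward way to solve the problem (a minimal Python sketch, not an official solution; the function names and input handling are my own choices): read the node table, find the root as the only node that never appears as a child, then recurse once for each of the three orders.

```python
import sys


def main():
    data = sys.stdin.read().split()
    n = int(data[0])
    left, right = {}, {}
    children = set()
    for i in range(n):
        node, l, r = (int(t) for t in data[1 + 3 * i: 4 + 3 * i])
        left[node], right[node] = l, r
        children.update(c for c in (l, r) if c != -1)
    # The root is the one node that is nobody's child.
    root = next(v for v in left if v not in children)

    def walk(v, order, out):
        """Collect node IDs in 'pre', 'in', or 'post' order."""
        if v == -1:
            return
        if order == 'pre':
            out.append(v)
        walk(left[v], order, out)
        if order == 'in':
            out.append(v)
        walk(right[v], order, out)
        if order == 'post':
            out.append(v)

    for title, order in (('Preorder', 'pre'), ('Inorder', 'in'), ('Postorder', 'post')):
        out = []
        walk(root, order, out)
        print(title)
        print(''.join(' %d' % v for v in out))  # a space before each node ID


main()
```

With n ≤ 25 the recursion depth is at most 25, so Python's default recursion limit is more than enough.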
https://www.misscarrington.com/28q0pdz3/fec186-absolute-permeability-formula
# Absolute Permeability Formula

The term "absolute permeability" is used in two senses: in electromagnetism it measures how strongly a material responds to an applied magnetic field, while in reservoir and soil engineering it measures how readily a fluid flows through a porous medium. Both senses are covered below.

## Magnetic permeability

In electromagnetism, permeability is the measure of magnetization that a material obtains in response to an applied magnetic field — the ease with which the material can be magnetized, or equivalently, how readily it conducts magnetic lines of force. It is the constant of proportionality between magnetic flux density B (in tesla) and magnetic field strength H (in ampere-turns per metre):

$$\mu = \frac{B}{H} \quad\text{or}\quad B = \mu H$$

The permeability of free space (vacuum) is the magnetic field constant

$$\mu_0 = 4\pi \times 10^{-7}\ \mathrm{H/m} \approx 1.257 \times 10^{-6}\ \mathrm{H/m}$$

and the absolute permeability of any other material is expressed relative to it:

$$\mu = \mu_0 \mu_r \qquad (2.24)$$

where the relative permeability μr is a dimensionless quantity. For air and other non-magnetic materials μr ≈ 1, so their absolute permeability is effectively the same constant μ0; the relative permeability of free space itself is exactly 1. A material is said to be diamagnetic if its permeability is less than μ0 and paramagnetic if it is greater than μ0. For ferromagnetic materials such as iron, μr may range from about 100 to 5000 depending on the grade of the material, and it is not a constant: the non-linear variation is conveniently displayed as a plot of flux density B against magnetic intensity H (Figure 2.5 illustrates B–H curves for some common materials). For a given applied magnetic intensity, the magnetic flux developed in a coil with a ferrous core is many times greater than in a similar coil with an air core, which is why practical inductors use ferrous cores and why most inductor calculations make use of μr or one of its many variants.

The SI unit of permeability is the henry per metre (H/m), equivalently the newton per ampere squared (N/A²), and its dimensional formula, obtained from μ = B/H, is [M¹ L¹ T⁻² I⁻²]. Permeability is the magnetic analogue of permittivity, which measures the opposition a material offers to the formation of an electric field:

$$\varepsilon = \frac{\text{electric displacement}}{\text{electric field intensity}}, \qquad \varepsilon_r = \frac{\varepsilon}{\varepsilon_0}$$

where εr, the relative permittivity of a dielectric, is the ratio of its absolute permittivity to the electric constant ε0.
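As a small numeric illustration (my own sketch, not from the source), the snippet below evaluates B = μ0·μr·H for an air core and for a hypothetical soft-iron core with an assumed μr of 2000; since real iron saturates, the linear formula only applies on the initial portion of the B–H curve.

```python
import math

MU_0 = 4 * math.pi * 1e-7  # permeability of free space, H/m


def flux_density(h_field, mu_r=1.0):
    """Return B = mu_0 * mu_r * H in tesla, for H in ampere-turns per metre."""
    return MU_0 * mu_r * h_field


print(flux_density(1000.0))          # air core: ~1.26e-3 T
print(flux_density(1000.0, 2000.0))  # iron core (mu_r = 2000 assumed): ~2.5 T, ignoring saturation
```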
## Rock (reservoir) permeability

In reservoir engineering, permeability is a property of the porous medium that measures its capacity to transmit fluids. Darcy's law — a particular case of general viscous fluid flow — was obtained empirically and defines the permeability k as the proportionality coefficient between the fluid velocity U and the pressure gradient dp/dx:

$$U = -\frac{k}{\mu}\frac{dp}{dx}$$

where μ here is the fluid viscosity. For steady horizontal flow through a core of length L and cross-sectional area A under a pressure differential Δp, this gives the flow rate

$$q = \frac{k A \,\Delta p}{\mu L}$$

A porous material has a permeability of 1 darcy if a pressure difference of 1 atm produces a flow rate of 1 cm³/s of a fluid of 1 cP viscosity through a cube with sides 1 cm in length. One darcy is a relatively high permeability, so permeability is commonly expressed in millidarcies (md). Knowledge of permeability is useful in a number of reservoir problems: from the practical point of view it is a measure of the productivity of a formation, and flow capacity (kh) derived from logs can be compared to pressure build-up analysis. It is important to recognize that each measurement method may represent rock permeability on a different scale, ranging from about an inch (core analysis) to thousands of feet (well testing), which makes it difficult to calculate representative values of permeability from actual measurements.

Three related quantities are distinguished:

- **Absolute permeability** — the permeability of the rock when it is completely saturated with a single fluid (100% Sw). It is a property of the porous medium, not of the flowing fluid.
- **Effective permeability** — the permeability of the rock to one particular fluid when other fluids are also present. A reservoir system by definition contains water together with oil or gas in varying amounts, and the flow of each phase is a function of the fluid saturations.
- **Relative permeability** — the ratio of the effective permeability of a fluid at a given saturation to the absolute (or some other base) permeability. It is a dimensionless quantity between 0 and 1; in practice it hardly ever reaches 1, and immobile saturations are observed because several saturation configurations do not allow fluid movement at all.

In the laboratory, absolute permeability is measured by flooding a core sample with a single-phase fluid (brine, oil, or gas) until a steady-state flow condition is attained — indicated by the same flow rate at the inlet and outlet of the core — and solving Darcy's equation for k (Figure 3.6). Worked example: a core 4 cm long and 3 cm² in cross-section is completely saturated with brine of 1.0 cP viscosity, which flows at a constant rate of 0.5 cm³/s under a 2.0 atm pressure differential. Applying Darcy's equation:

$$k = \frac{q \mu L}{A \,\Delta p} = \frac{0.5 \times 1.0 \times 4}{3 \times 2.0} = 0.333\ \text{darcy} = 333\ \text{md}$$

Reworking the example with an oil of 2.0 cP viscosity, which flows at 0.25 cm³/s under the same differential pressure, gives exactly the same k: permeability at 100% saturation of a fluid (other than gases at low pressure) is a characteristic of the rock and not a function of the flowing fluid.
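The worked example translates directly into code (a minimal sketch; units follow the example — cm, cm², cm³/s, cP, and atm — giving k in darcys):

```python
def absolute_permeability(q, mu, length, area, dp):
    """Absolute permeability in darcys from Darcy's law: k = q * mu * L / (A * dp).

    q      -- flow rate, cm^3/s
    mu     -- fluid viscosity, cP
    length -- core length, cm
    area   -- cross-sectional area, cm^2
    dp     -- pressure differential, atm
    """
    return q * mu * length / (area * dp)


# Brine run: 0.5 cm^3/s of 1.0 cP brine across a 4 cm x 3 cm^2 core at 2.0 atm
print(absolute_permeability(0.5, 1.0, 4.0, 3.0, 2.0))   # 0.333... darcy

# Oil rerun: 0.25 cm^3/s of 2.0 cP oil, same core and pressure drop -> same k,
# because absolute permeability is a property of the rock, not the fluid
print(absolute_permeability(0.25, 2.0, 4.0, 3.0, 2.0))  # 0.333... darcy
```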
Several practical complications arise:

**Gas slippage (Klinkenberg effect).** When permeabilities are measured with gases, corrections must be made for gas slippage, which occurs when the capillary openings approach the mean free path of the gas. Klinkenberg plotted measured gas permeability against the reciprocal mean flowing pressure 1/p̄ for hydrogen, nitrogen, and carbon dioxide and obtained straight lines; extrapolating all data to infinite mean pressure, the points converge at an equivalent liquid permeability kℓ, the same as the permeability of the medium to a non-reactive single-phase liquid:

$$k_g = k_\ell \left(1 + \frac{b}{\bar{p}}\right)$$

where b is a constant for a given gas in a given medium, equal to the slope of the line divided by kℓ. Permeability to liquids is therefore lower than the uncorrected air permeability.

**Core measurement artifacts.** Core permeability may not represent in-situ permeability: when cores are retrieved from a reservoir, the confining forces are removed and the rock can expand in all directions, increasing the dimensions of the available flow paths; microfractures may develop during coring; and the core may be exposed to contaminants during handling. Fractured samples must be eliminated from the core data set. For gas or brine flow in low-permeability sandstones, permeabilities were found to be independent of temperature at all levels of confining pressure [130], while air permeability decreases with overburden pressure [131].

**Wettability and irreducible water saturation.** In a water-wet rock, water tends to occupy the small pores and to contact most of the rock surface (Bobek at al., 1958; Raza et al., 1968); in an oil-wet rock, the oil occupies the smaller pores and is in contact with the rock, and the relative permeabilities are a mirror image of those in a water-wet rock. The oil composition is a critical determining factor for wettability, and in general carbonates tend to be oil-wet as often as water-wet (Anderson, 1986). Lower-permeability sandstones have more strongly water-wet characteristics with higher irreducible water saturations; the surface area is smaller in well-sorted rock, and therefore the irreducible water saturation is lower. There is typically a power-law relationship between permeability and irreducible water saturation, of the form k = a·Swi⁻ᵇ, in which a and b are constants (Figure 8.1.1 shows some permeability–irreducible water saturation correlations); irreducible water saturations in productive reservoirs are usually less than 40%, and the logarithm of air permeability decreases roughly linearly with connate water saturation [132].

**Saturation history and capillary number.** Relative permeability is history dependent — drainage (equivalent to oil migrating into an originally water-filled reservoir) and imbibition follow different curves (typical oil–water drainage and imbibition curves are shown in Figure 8.1.4) — so there is even more uncertainty in applying laboratory results than for absolute permeability. Whether trapped fluid can be mobilized is governed by the capillary number, a dimensionless ratio of viscous to capillary forces:

$$N_c = \frac{v \mu}{\sigma}$$

where v is the fluid velocity, μ the viscosity, and σ the interfacial tension. Capillary forces dominate when Nc is less than approximately 10⁻², as shown in Figure 8.1.3. In miscible floods the interfacial tension is very low, the capillary number exceeds 10⁻², and previously immobile fluid can be mobilized, which is why miscible floods displace more oil than immiscible floods. Residual oil is often poorly connected in vuggy carbonate reservoirs, where the ratio of pore diameter to pore-throat size is high, resulting in poor sweep, high residual oil saturations, and high trapped gas saturations. Three-phase relative permeabilities are almost always estimated from correlations.

**Related formulas.** In soil mechanics, the absolute permeability K̄ is related to the hydraulic conductivity K (a velocity) by K̄ = Kη/γw, where η is the viscosity of water and γw its unit weight; when the liquid is water at 20 °C, k (m²) = 1.042 × 10⁻⁷ K (m/s), and a conductivity measured at temperature t is corrected to 20 °C by k₂₀ = kₜ·μₜ/μ₂₀. The coefficient of permeability is determined in the laboratory with a permeameter, by either a constant-head or a falling-head test, and the transmissivity of an aquifer is T = kb, where k is the average coefficient of permeability and b the stratum thickness. The Kozeny–Carman equation estimates the absolute permeability of sandy soils from the pore structure:

$$K = \frac{1}{k_0 T^2 S_0^2} \cdot \frac{e^3}{1+e} \qquad (5.9)$$

where k₀ is a pore shape factor (≈ 2.5), T the tortuosity, S₀ the specific surface, and e the void ratio. Similar constant-head and falling-head procedures, together with air-permeability testers, are used to measure the intrinsic (specific) permeability of nonwoven fabrics, which depends solely on the fabric structure.
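To make the Klinkenberg extrapolation concrete, here is a sketch with synthetic measurements (the numbers are invented to satisfy k_g = kℓ(1 + b/p̄) with kℓ = 0.2 darcy and b = 0.5 atm; the fitting procedure is the point, not the data):

```python
import numpy as np

# Hypothetical gas-permeability measurements at several mean flowing pressures
p_mean = np.array([1.0, 2.0, 4.0, 8.0])        # mean flowing pressure, atm
k_gas = np.array([0.30, 0.25, 0.225, 0.2125])  # measured gas permeability, darcy

# k_gas = k_liquid + slope * (1 / p_mean): extrapolating 1/p_mean -> 0
# (infinite mean pressure) yields the equivalent liquid permeability.
slope, k_liquid = np.polyfit(1.0 / p_mean, k_gas, 1)
print(k_liquid)          # 0.20 darcy
print(slope / k_liquid)  # Klinkenberg constant b = 0.5 atm
```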
https://calculus7.org/tag/irrational-numbers/
## Very fractional geometric progressions with an integer ratio
The geometric progression 1/3, 2/3, 4/3, 8/3, 16/3,… is notable for being dyadic (ratio 2) and staying as far away from the integers as possible: the distance between this progression and the set of integers is 1/3, and no other dyadic progression stays further away. This property is occasionally useful: by taking the union of the dyadic partition of an interval with its shift by 1/3, one gets a system of intervals that comfortably covers every point: for every point x and every (small) radius r there is an interval of length comparable to r in which x lies near the middle.
It’s easy to see that for any real number x, the distance between the progression {x, 2x, 4x, 8x, …} and the set of integers cannot be greater than 1/3. Indeed, since the integer part of x does not matter, it suffices to consider x between 0 and 1. The values between 0 and 1/3 lose immediately; the values between 1/3 and 1/2 lose after being multiplied by 2. And since x and 1-x yield the same distance, we are done.
Let’s find the most fractional progressions with other integer ratios r. When r is odd, the solution is obvious: starting with 1/2 keeps all the terms at half-integers, so the distance 1/2 is achieved. When r is even, say r = 2k, the best starting value is x = k/(2k+1), which achieves the distance x since rx = k-x. The values between 0 and k/(2k+1) are obviously worse, and those between k/(2k+1) and 1/2 become worse after being multiplied by r: they are mapped to the interval between k-x and k.
The problem is solved! But what if… x is irrational?
Returning to ratio r=2, it is clear that 1/3 is no longer attainable. The base-2 expansion of x cannot be 010101… as that would be periodic. So it must contain either 00 or 11 somewhere. Either of those will bring a dyadic multiple of x within distance less than 0.001111… (base 2) of an integer, that is distance 1/4.
The goal is to construct x so that its binary expansion is as balanced between 0 and 1 as possible, but without being periodic. The Thue-Morse constant does exactly this. It’s constructed by starting with 0 and then adding the complement of the sequence constructed so far: x = .0 1 10 1001 10010110 … which is approximately 0.412. The closest the dyadic geometric progression starting with x comes to an integer is 2x, which has distance about 0.175. The Wikipedia article links to the survey The ubiquitous Prouhet-Thue-Morse sequence by Allouche and Shallit, in which Corollary 2 implies that no other irrational number has a dyadic progression with a greater distance from integers, provided that this distance is attained. I have not been able to sort out the case in which the distance from a progression to the integers is not attained, but it seems very likely that Thue-Morse remains on top.
What about other ratios? When the ratio r is even, the situation is essentially the same as for r=2, for the following reason. In base r there are two digits nearest (r-1)/2, for example 4 and 5 in base 10. Using these digits in the Thue-Morse sequence, we get a strong candidate for the most fractional progression with ratio r: for example, 0.455454455445… in base 10, with the distance of about 0.445. Using any other digit loses the game at once: for example, having a 3 somewhere in the decimal expansion of x implies that some power-of-10 multiple of x is within less than 0.39999… = 0.4 of an integer.
When the ratio is odd, there are three digits that could conceivably be used in the extremal x: namely, (r-1)/2 and its two neighbors. If the central digit (r-1)/2 is never used, we are back to the Thue-Morse pattern, such as x = 0.0220200220020220… in base 3 (an element of the standard Cantor set, by the way). But this is an unspectacular achievement, with the distance of about 0.0852. One can do better by starting with 1/2 = 0.1111111… and sprinkling this ternary expansion with 0s or 2s in some aperiodic way, doing so very infrequently. By making the runs of 1s extremely long, we get the distance arbitrarily close to 1 – 0.2111111… base 3, which is simply 1/2 – 1/3 = 1/6.
So it seems that for irrational geometric progressions with an odd ratio r, the distance to integers can be arbitrarily close to the number 1/2 – 1/r, but there is no progression achieving this value.
## Iterating the logistic map: limsup of nonperiodic orbits
Last time we found that when a sequence with ${x_1\in (0,1)}$ and ${x_{n+1} = 4x_n(1-x_n)}$ does not become periodic, its upper limit ${\limsup x_n}$ must be at least ${\approx 0.925}$. This time we’ll see that ${\limsup x_n}$ can be as low as ${(2+\sqrt{3})/4\approx 0.933}$ and determine for which ${x_1}$ it is equal to 1.
The quadratic polynomial ${f(x)=4x(1-x)}$ maps the interval ${[0,1]}$ onto itself. Since the linear function ${g(x) = 1-2x}$ maps ${[0,1]}$ onto ${[-1,1]}$, it follows that the composition ${h=g\circ f\circ g^{-1}}$ maps ${[-1,1]}$ onto ${[-1,1]}$. This composition is easy to compute: ${h(x) = 2x^2-1 }$.
We want to know whether the iteration of ${f}$, starting from ${x_1}$, produces numbers arbitrarily close to ${1}$. Since ${f\circ f \circ \cdots \circ f = g^{-1}\circ h \circ h \circ \cdots \circ h\circ g}$ the goal is equivalent to finding whether the iteration of ${h}$, starting from ${g(x_1)}$, produces numbers arbitrarily close to ${g(1) = -1}$. To shorten formulas, let’s write ${h_n}$ for the ${n}$th iterate of ${h}$, for example, ${h_3 = h\circ h\circ h}$.
So far we traded one quadratic polynomial ${f}$ for another, ${h}$. But ${h}$ satisfies a nice identity: ${h(\cos t)=2\cos^2 t-1 = \cos(2t)}$, hence ${h_n(\cos t) = \cos (2^n t)}$ for all ${n\in\mathbb N}$. It’s convenient to introduce ${\alpha = \frac{1}{\pi}\cos^{-1}(1-2x_1)}$, so that ${ h_n(g(x_1)) = h_n(\cos \pi \alpha ) = \cos(2^n \pi \alpha) }$.
The problem becomes to determine whether the numbers ${2^n \pi \alpha}$ come arbitrarily close to ${\pi}$, modulo an integer multiple of ${2\pi}$. Dividing by ${2\pi}$ rephrases this as: does the fractional part of ${2^{n-1} \alpha}$ come arbitrarily close to ${1/2}$?
A number that is close to ${1/2}$ has the binary expansion beginning either with ${0.01111111\dots}$ or with ${0.10000000\dots}$. Since the binary expansion of ${2^n\alpha}$ is just the binary expansion of ${\alpha}$ shifted ${n}$ digits to the left, the property ${\limsup x_n=1}$ is equivalent to the following: for every ${k\in\mathbb N}$ the binary expansion of ${\alpha}$ has infinitely many groups of the form “1 followed by k zeros” or “0 followed by k ones”.
A periodic expansion cannot have the above property; thus, ${\alpha}$ must be irrational. The property described above can then be simplified to “irrational and has arbitrarily long runs of the same digit”, since a long run of ${0}$s will be preceded by a ${1}$, and vice versa.
For example, combining the pairs 01 and 10 in some non-periodic way, we get an irrational number ${\alpha}$ such that the fractional part of ${2^n\alpha}$ does not get any closer to 1/2 than ${0.01\overline{10}_2 = 5/12}$ or ${0.10\overline{01}_2 = 7/12}$. Hence, ${\cos (2^n \pi \alpha) \ge -\sqrt{3}/2}$, which leads to the upper bound ${x_n\le (2+\sqrt{3})/4\approx 0.933}$ for the sequence with the starting value ${x_1=(1-\cos\pi\alpha)/2}$.
Let us summarize the above observations about ${\limsup x_n}$.
Theorem: ${\limsup x_n=1}$ if and only if (A) the number ${\alpha = \frac{1}{\pi}\cos^{-1}(1-2x_1)}$ is irrational, and (B) the binary expansion of ${\alpha}$ has arbitrarily long runs of the same digit.
Intuitively, one expects that a number that satisfies (A) will also satisfy (B) unless it was constructed specifically to fail (B). But to verify that (B) holds for a given number is not an easy task.
As a bonus, let’s prove that for every rational number ${y\in (-1,1)}$, except 0, 1/2 and -1/2, the number ${\alpha = \frac{1}{\pi}\cos^{-1}y}$ is irrational. This will imply, in particular, that ${x_1=1/3}$ yields a non-periodic sequence. The proof follows a post by Robert Israel and requires a lemma (which could be replaced with an appeal to Chebyshev polynomials, but the lemma keeps things self-contained).
Lemma. For every ${n\in \mathbb N}$ there exists a monic polynomial ${P_n}$ with integer coefficients such that ${P_n(2 \cos t) = 2\cos nt }$ for all ${t}$.
Proof. Induction, the base case ${n=1}$ being ${P_1(x)=x}$. Assuming the result for integers ${\le n}$, we have ${2 \cos (n+1)t = e^{i(n+1)t} + e^{-i(n+1)t} }$ ${ = (e^{int} + e^{-int})(e^{it} + e^{-it}) - (e^{i(n-1)t} + e^{-i(n-1)t}) }$ ${ = P_n(2 \cos t) (2\cos t) - P_{n-1}(2\cos t) }$
which is a monic polynomial of ${2\cos t}$. ${\Box}$
Suppose, for contradiction, that ${\alpha}$ is rational; then there exists ${n}$ such that ${n\alpha \in\mathbb Z}$, and so ${2\cos(\pi n\alpha)=\pm 2}$. By the lemma, this implies ${P_n(2\cos(\pi \alpha)) =\pm 2}$, that is ${P_n(2y)=\pm 2}$. Since ${2y}$ is a rational root of a monic polynomial with integer coefficients, the Rational Root Theorem implies that ${2y}$ is an integer. But ${2y\in(-2,2)}$ and ${y\notin\{0,1/2,-1/2\}}$, so no integer value is available: a contradiction. ${\Box}$
## Wild power pie
Many people are aware of ${\pi}$ being a number between 3 and 4, and some also know that ${e}$ is between 2 and 3. Although the difference ${\pi-e}$ is less than 1/2, it’s enough to place the two constants in separate buckets on the number line, separated by an integer.
When dealing with powers of ${e}$, using ${e>2}$ is frequently wasteful, so it helps to know that ${e^2>7}$. Similarly, ${\pi^2<10}$ is way more precise than ${\pi<4}$. To summarize: ${e^2}$ is between 7 and 8, while ${\pi^2}$ is between 9 and 10.
Do any two powers of ${\pi}$ and ${e}$ have the same integer part? That is, does the equation ${\lfloor \pi^n \rfloor = \lfloor e^m \rfloor}$ have a solution in positive integers ${m,n}$?
Probably not. Chances are that the only pairs ${(m,n)}$ for which ${|\pi^n - e^m|<10}$ are ${m,n\in \{1,2\}}$, the smallest difference attained by ${m=n=1}$.
Indeed, having ${|\pi^n - e^m|<1}$ implies that ${|n\log \pi - m| \lesssim \pi^{-n}}$, or put differently, ${\left|\log \pi - \dfrac{m}{n}\right| < \dfrac{1}{n \,\pi^n}}$. This would be an extraordinary rational approximation… for example, with ${n=100}$ it would mean that ${\log \pi = 1.14\ldots}$ with the following ${50}$ digits all being ${0}$. This isn’t happening.
Looking at the continued fraction expansion of ${\log \pi}$ shows denominators of modest size ${[1; 6, 1, 10, 24, \dots]}$, indicating the lack of extraordinarily nice rational approximations. Of course, one can still use the convergents to get good approximations, ${\left|\log \pi - \dfrac{m}{n}\right| < \dfrac{1}{n^2}}$, which leads to ${\pi^n\approx e^m}$ with small relative error. For example, dropping ${24}$ and subsequent terms yields the convergent ${87/76}$, and one can check that ${\pi^{76} = 6.0728... \cdot 10^{37}}$ while ${e^{87} = 6.0760...\cdot 10^{37}}$.
Trying a few not-too-obscure constants with the help of the mpmath library, the best coincidence of integer parts that I found is the following: the 13th power of the golden ratio ${\varphi = (\sqrt{5}+1)/2}$ and the 34th power of Apéry’s constant ${\zeta(3) = 1^{-3}+2^{-3}+3^{-3}+4^{-3}+\dots}$ both have integer part 521.
|
2020-11-24 18:17:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 121, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8834018111228943, "perplexity": 273.0704135893689}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141176922.14/warc/CC-MAIN-20201124170142-20201124200142-00650.warc.gz"}
|
https://intelligencemission.com/free-electricity-video-free-energy-forum.html
|
This tells us that the change in free energy equals the reversible or maximum work for a process performed at constant temperature. Under other conditions, the free-energy change is not equal to work; for instance, for a reversible adiabatic expansion of an ideal gas, $\Delta A=w_{rev}-S\Delta T$. Importantly, for a heat engine, including the Carnot cycle, the free-energy change after a full cycle is zero, $\Delta_{cyc}A=0$, while the engine produces nonzero work.
We can make the following conclusions about when processes will have a negative $\Delta G_\text{system}$:

$\Delta G = \Delta H - T\Delta S = 6.01\,\frac{\text{kJ}}{\text{mol-rxn}} - (293\,\text{K})\left(0.022\,\frac{\text{kJ}}{\text{mol-rxn}\cdot\text{K}}\right) = 6.01\,\frac{\text{kJ}}{\text{mol-rxn}} - 6.45\,\frac{\text{kJ}}{\text{mol-rxn}} = -0.44\,\frac{\text{kJ}}{\text{mol-rxn}}$

Being able to calculate $\Delta G$ can be enormously useful when we are trying to design experiments in lab! We will often want to know which direction a reaction will proceed at a particular temperature, especially if we are trying to make a particular product. Chances are we would strongly prefer the reaction to proceed in a particular direction (the direction that makes our product!), but it’s hard to argue with a positive $\Delta G$! Our bodies are constantly active. Whether we’re sleeping or whether we’re awake, our body is carrying out many chemical reactions to sustain life. Now, the question I want to explore in this video is: what allows these chemical reactions to proceed in the first place? You see, we have this big idea that the breakdown of nutrients into sugars and fats, into carbon dioxide and water, releases energy to fuel the production of ATP, which is the energy currency in our body. Many textbooks go one step further to say that this process and other energy-releasing processes, that is to say, chemical reactions that release energy, have something called a negative delta G value, or a negative Gibbs free energy. In this video, we’re going to talk about what the change in Gibbs free energy, or delta G as it’s most commonly known, is, and what the sign of this numerical value tells us about the reaction. Now, in order to understand delta G, we need to be talking about a specific chemical reaction, because delta G is a quantity that’s defined for a given reaction or a sum of reactions. So for the purposes of simplicity, let’s say that we have some hypothetical reaction where A is turning into a product B. Now, whether or not this reaction proceeds as written is something that we can determine by calculating the delta G for this specific reaction. So just to phrase this again, the delta G, or change in Gibbs free energy, of a reaction tells us very simply whether or not a reaction will occur.
This expression has commonly been interpreted to mean that work is extracted from the internal energy U while TS represents energy not available to perform work. However, this is incorrect. For instance, in an isothermal expansion of an ideal gas, the change in internal energy is $\Delta U = 0$, and the expansion work $w = -T \Delta S$ is derived exclusively from the $TS$ term supposedly not available to perform work.
The inventor of the Perendev magnetic motor is now in jail for defrauding investors out of more than a million dollars because he never delivered on his promised motors. Of course he will come up with some excuse, or his supporters will: that they could have delivered if they had more time, or the old classic, that the plans were lost or stolen. The sooner we jail all free-energy motor con artists the better for all; they are a distraction and they prey on the ignorant. To create a water molecule, some amount of energy X was released. Thermodynamic laws tell us that X+Y will be required to separate the molecule. Thus, it would take more energy to separate the water molecule (in whatever form) than the reaction would produce. The reverse, however (separating the bond and then recombining for use), would be a great implementation. But that is the basis of the hydrogen fuel cell, and someone already has that one. Instead of killing ourselves with the magnetic “theory”, has anyone thought about water-fueled engines? Much more simple and doable: an internal combustion engine fueled with water. Well, not precisely water in its liquid state, but hydrogen and oxygen mixed. In liquid water those elements are chained with energy, energy that we didn’t spend any effort to “create” (nature did the job for us), and it is contained in the molecular bond. So the problem is to decompose liquid water into those elements using small amounts of energy (I think radio waves could do the job) and burn those elements in an effective engine. Can this be done or what? Any guru can help? Magnets are not the source of the energy.
Free energy is that portion of any first-law energy that is available to perform thermodynamic work at constant temperature, i.e., work mediated by thermal energy. Free energy is subject to irreversible loss in the course of such work. Since first-law energy is always conserved, it is evident that free energy is an expendable, second-law kind of energy. Several free energy functions may be formulated based on system criteria. Free energy functions are Legendre transforms of the internal energy.
|
2019-04-21 09:11:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8945002555847168, "perplexity": 1490.0917633512886}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578530505.30/warc/CC-MAIN-20190421080255-20190421102255-00508.warc.gz"}
|
http://bootmath.com/continuity-of-max-function.html
|
# Continuity of $\max$ function
Given continuous functions $f,g: \mathbb{R} \to \mathbb{R}$, in order to prove that $\max(f(x),g(x))$ is continuous, a standard trick is to rewrite it as a linear combination of continuous functions:
$$\max(f(x),g(x)) = \frac{1}{2} \left(f(x) + g(x) + |f(x) - g(x)| \right)$$

Is there any sort of motivation for why one might come up with this particular combination of continuous functions? I can see that it works, but without knowing this fact beforehand, what might lead you to consider writing $\max(f(x),g(x))$ in the above manner?
#### Solutions Collecting From Web of "Continuity of $\max$ function"
What I would do is this:
$$\max(f(x),g(x))=g(x)+\max(f(x)-g(x),0),$$
so this reduces us to write an expression for $\max(f(x),0)$. If we write $f=f^+-f^-$ (positive and negative parts), we have $\max(f(x),0)=f^+(x)$. We also have $|f|=f^++f^-$, so $f^+=(f+|f|)/2$. In the end,
$$\max(f(x),0)=\frac{f(x)+|f(x)|}2.$$
If we now go back to the initial case,
\begin{align}
\max(f(x),g(x))&=g(x)+\max(f(x)-g(x),0)\\ \ \\ &=g(x)+\frac{f(x)-g(x)+|f(x)-g(x)|}2\\ \ \\ &=\frac{f(x)+g(x)+|f(x)-g(x)|}2
\end{align}
Given two real numbers $a$ and $b$ there is a quadratic equation having these two numbers as solutions, namely
$$(x-a)(x-b)=x^2-p x + q =0\ ,$$
where $p=a+b$ and $q=ab$. The two coefficients $p$ and $q$ encode the multiset $\{a,b\}$ in a reversible way, insofar as we can write
$$\{a,b\}=\left\{{p-\sqrt{p^2-4q}\over2},{p+\sqrt{p^2-4q}\over2}\right\}\ .$$
Now on the right side the two numbers appear in increasing order. Therefore
$$\max\{a,b\}={1\over2}\left(p+\sqrt{p^2-4q}\right)={1\over2}\bigl(a+b+|a-b|\bigr)\ .$$
I think I’d start by considering the easier case where one of $f$, $g$ was constant and zero. If I’ve proved that $\max(h(x),0)$ is continuous for all continuous $h$ and want to generalise, I’ll see that I can write
$$\max(f(x),g(x)) = \max (f(x)-g(x), 0) + g(x).$$
This formula breaks the symmetry between $f$ and $g$ that we started with, so it’s natural also to write down $$\max(f(x), g(x)) = \max( g(x)-f(x), 0) + f(x)$$
and we recover the symmetry by adding the two expressions up:
$$2 \max (f(x),g(x)) = f(x) + g(x) +\left(\max (f(x)-g(x), 0) + \max( g(x)-f(x), 0)\right).$$
And now the expression inside the bracket looks familiar, being exactly $|f(x)-g(x)|$.
Indeed you really have to break up $|f(x)|$ into these two parts to prove that the absolute value of a continuous function is continuous. So the magic formula doesn’t really make for a shortcut in the proof, even if you’re told about it without motivation.
We’re given two numbers, $x$ and $y$, and want to figure out the bigger one. Now we probably have to define a function piecewise. However, there already is one simple piecewise defined function, which is also closely related to comparing numbers, namely the absolute value function. I think this is the biggest leap you have to take.
When you notice that $x\ge y$ iff $x-y\ge 0$ iff $|x-y|=x-y$, and similarly $x\le y$ iff $|x-y|=y-x$, you’re basically done. In the first case adding $y$, the smaller number, will give $|x-y|+y=x$ and in the second case adding $x$, again the smaller number, gives $|x-y|+x=y$. If you add both numbers you just get the bigger number times two, and you’ve managed to hide the selection process inside the absolute value function!
|
2018-06-25 12:15:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9806484580039978, "perplexity": 144.69844035908054}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267867666.97/warc/CC-MAIN-20180625111632-20180625131632-00014.warc.gz"}
|
http://atcoder.noip.space/contest/agc001/b
|
Score : $500$ points
### Problem Statement
Snuke is conducting an optical experiment using mirrors and his new invention, the rifle of Mysterious Light.
Three mirrors of length $N$ are set so that they form an equilateral triangle. Let the vertices of the triangle be $a, b$ and $c$.
Inside the triangle, the rifle is placed at the point $p$ on segment $ab$ such that $ap = X$. (The size of the rifle is negligible.) Now, the rifle is about to fire a ray of Mysterious Light in the direction of $bc$.
The ray of Mysterious Light will travel in a straight line, and will be reflected by mirrors, in the same ways as "ordinary" light. There is one major difference, though: it will be also reflected by its own trajectory as if it is a mirror! When the ray comes back to the rifle, the ray will be absorbed.
The following image shows the ray's trajectory where $N = 5$ and $X = 2$.
It can be shown that the ray eventually comes back to the rifle and is absorbed, regardless of the values of $N$ and $X$. Find the total length of the ray's trajectory.
### Constraints
• $2≦N≦10^{12}$
• $1≦X≦N-1$
• $N$ and $X$ are integers.
### Partial Points
• $300$ points will be awarded for passing the test set satisfying $N≦1000$.
• Another $200$ points will be awarded for passing the test set without additional constraints.
### Input
The input is given from Standard Input in the following format:
$N$ $X$
### Output
Print the total length of the ray's trajectory.
### Sample Input 1
5 2
### Sample Output 1
12
Refer to the image in the Problem Statement section. The total length of the trajectory is $2+3+2+2+1+1+1 = 12$.
|
2022-12-05 04:35:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7825336456298828, "perplexity": 535.1267714418796}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711003.56/warc/CC-MAIN-20221205032447-20221205062447-00676.warc.gz"}
|
https://www.esaral.com/q/find-the-values-of-x-in-each-of-the-following-40419
|
# Find the values of x in each of the following
Question:
Find the values of x in each of the following
Solution:
(i) $2^{5 x} \div 2^{x}=\sqrt[5]{2^{20}}$
We have
$2^{5 x} \div 2^{x}=\sqrt[5]{\left(2^{20}\right)}$
$=\frac{2^{5 x}}{2^{x}}=\left(2^{20}\right)^{\frac{1}{5}}$
$=2^{5 x-x}=2^{20 \times \frac{1}{5}}$
$=2^{4 x}=2^{4}$
= 4x = 4 [On equating exponent]
x = 1
Hence the value of x is 1
(ii) $\left(2^{3}\right)^{4}=\left(2^{2}\right)^{x}$
We have
$\left(2^{3}\right)^{4}=\left(2^{2}\right)^{x}$
$=2^{3 \times 4}=2^{2 \times x}$
12 = 2x
2x = 12 [On equating exponents]
x = 6
Hence the value of x is 6
(iii) $(3 / 5)^{x}(5 / 3)^{2 x}=125 / 27$
We have
$(3 / 5)^{x}(5 / 3)^{2 x}=125 / 27$
$\Rightarrow \frac{(3)^{x}}{(5)^{x}} \frac{(5)^{2 x}}{(3)^{2 x}}=\frac{5^{3}}{3^{3}}$
$\Rightarrow 5^{2 x-x} / 3^{2 x-x}=5^{3} / 3^{3}$
$\Rightarrow 5^{x} / 3^{x}=5^{3} / 3^{3}$
$\Rightarrow(5 / 3)^{x}=(5 / 3)^{3}$
x = 3 [on equating exponents]
Hence the value of x is 3
(iv) $5^{x-2} \times 3^{2 x-3}=135$
We have,
$5^{x-2} \times 3^{2 x-3}=135$
$\Rightarrow 5^{x-2} \times 3^{2 x-3}=5 \times 27$
$\Rightarrow 5^{x-2} \times 3^{2 x-3}=5^{1} \times 3^{3}$
⇒ x − 2 = 1, 2x − 3 = 3 [On equating exponents]
⇒ x = 2 + 1, 2x = 3 + 3
⇒ x = 3, 2x = 6
⇒ x = 3
Hence the value of x is 3
(v) $2^{x-7} \times 5^{x-4}=1250$
We have
$2^{x-7} \times 5^{x-4}=1250$
$\Rightarrow 2^{x-7} \times 5^{x-4}=2 \times 625$
$\Rightarrow 2^{x-7} \times 5^{x-4}=2 \times 5^{4}$
⇒ x − 7 = 1
⇒ x = 8, x − 4 = 4
⇒ x = 8
Hence the value of x is 8
(vi) $(\sqrt[3]{4})^{2 x+\frac{1}{2}}=\frac{1}{32}$
$\left(4^{\frac{1}{3}}\right)^{2 x+\frac{1}{2}}=\frac{1}{32}$
$(4)^{\frac{1}{3}\left(2 x+\frac{1}{2}\right)}=\frac{1}{32}$
$(4)^{\frac{1}{3}\left(2 x+\frac{1}{2}\right)}=\frac{1}{2^{5}}$
$(4)^{\frac{2}{3} x+\frac{1}{6}}=\frac{1}{2^{5}}$
$\left(2^{2}\right)^{\frac{2}{3} x+\frac{1}{6}}=\frac{1}{2^{5}}$
(2) $^{2\left(\frac{2}{3} x+\frac{1}{6}\right)}=\frac{1}{2^{5}}$
$(2)^{\frac{4}{3} x+\frac{2}{6}}=\frac{1}{2^{5}}$
$(2)^{\frac{4}{3} x+\frac{1}{3}}=2^{-5}$
$\frac{4}{3} x+\frac{1}{3}=-5$
4x + 1 = -15
4x = -15 - 1
4x = -16
x = (-16)/4
x = -4
Hence the value of x is -4
(vii) $5^{2 x+3}=1$
$5^{2 x+3}=1=5^{0}$
2x + 3 = 0 [By equating exponents]
2x = −3
x = −3/2
Hence the value of x is −3/2
(viii) $(13)^{\sqrt{x}}=4^{4}-3^{4}-6$
$(13)^{\sqrt{x}}=256-81-6$
$(13)^{\sqrt{x}}=256-87$
$(13)^{\sqrt{x}}=169$
$(13)^{\sqrt{x}}=13^{2}$
$\sqrt{x}=2$ [By equating exponents]
$(\sqrt{x})^{2}=(2)^{2}$
x = 4
Hence the value of x is 4
(ix) $\left(\sqrt{\frac{3}{5}}\right)^{x+1}=\frac{125}{27}$
$\left(\sqrt{\frac{3}{5}}\right)^{x+1}=\frac{5^{3}}{3^{3}}$
$\left(\sqrt{\frac{3}{5}}\right)^{x+1}=\left(\frac{5}{3}\right)^{3}$
$\left(\sqrt{\frac{3}{5}}\right)^{x+1}=\left(\frac{3}{5}\right)^{-3}$
$\left(\frac{3}{5}\right)^{\frac{1}{2}(x+1)}=\left(\frac{3}{5}\right)^{-3}$
$\frac{1}{2}(x+1)=-3$
x + 1 = - 6
x = - 6 - 1
x = -7
Hence the value of x is -7
|
2023-01-28 07:59:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6210690140724182, "perplexity": 2466.0450906918345}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499524.28/warc/CC-MAIN-20230128054815-20230128084815-00797.warc.gz"}
|
https://www.albert.io/ie/ap-statistics/sample-distributions-central-limit-theorem
|
Moderate
# Sample Distributions: Central Limit Theorem
APSTAT-EVPKUT
A known population has a highly skewed distribution. You take all possible samples of size $8$ from this skewed distribution.
What does the central limit theorem say about the shape of your sampling distribution?
A
The sampling distribution will be approximately normal.
B
The sampling distribution will be exactly normal.
C
The sampling distribution will be skewed, but less skewed than the population.
D
The sampling distribution will be skewed to the same degree as the population.
E
Unable to determine the shape of the sampling distribution due to the small sample size.
|
2017-02-25 18:07:36
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6570384502410889, "perplexity": 655.184470096881}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171807.25/warc/CC-MAIN-20170219104611-00564-ip-10-171-10-108.ec2.internal.warc.gz"}
|
https://math.stackexchange.com/questions/2921626/nearest-with-respect-to-weights-symmetric-positive-semidefinite-matrix
|
# Nearest (with respect to weights) symmetric positive semidefinite matrix
I want to compute the nearest symmetric positive semidefinite matrix, similar as Higham did. But here also weights (given by an inverse co-variance matrix) should be taken into account.
So the problem is as follows: find the vector $x$ by minimizing $$(a-x)' \cdot M\cdot(a-x)$$ subject to $X$ being Hermitian/symmetric and positive (semi-)definite, where $x=vec(X)$ and $a=vec(A)$. $M$ is the inverse of a covariance matrix, i.e. it is Hermitian/symmetric and positive definite, too. $A$ and $M$ are given.
As said above, with no special weights ($M=I$, the identity matrix), the problem can be solved according to Higham by setting $X$ to $A$ with all negative eigenvalues of $A$ set to $0$. I already found a numerically stable solution, but I am wondering if there is also an algebraic solution.
Any ideas? Thank you
• The mentioned simplified Higham solution works, since $A$ is also Hermitian/symmetric (but with some negative eigenvalues). The only way I see to formally relate $a$ to $A$ is by using the properties of the Kronecker product, i.e. $AXB=C \Leftrightarrow (B' \otimes A)\, vec(X)=vec(C)$. – Matthias Wurm Sep 19 '18 at 6:17
|
2019-06-16 16:33:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9186519980430603, "perplexity": 370.2169278577655}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998288.34/warc/CC-MAIN-20190616162745-20190616184745-00491.warc.gz"}
|
https://www.snapxam.com/problems/57910727/derivative-of-x-33-ln-x-1-
|
# Find the higher order derivative of x^33+ln(x+1)
## $\frac{d^2}{dx^2}\left(x^{33}+\ln\left(x+1\right)\right)$
$1056x^{31}-\frac{1}{\left(1+x\right)^2}$
## Step by step solution
Problem
$\frac{d^2}{dx^2}\left(x^{33}+\ln\left(x+1\right)\right)$
1
Rewriting the high order derivative
$\frac{d^{\left(2-1\right)}}{dx^{\left(2-1\right)}}\left(\frac{d}{dx}\left(\ln\left(1+x\right)+x^{33}\right)\right)$
2
The derivative of a sum of two functions is the sum of the derivatives of each function
$\frac{d^{\left(2-1\right)}}{dx^{\left(2-1\right)}}\left(\frac{d}{dx}\left(\ln\left(1+x\right)\right)+\frac{d}{dx}\left(x^{33}\right)\right)$
3
The power rule for differentiation states that if $n$ is a real number and $f(x) = x^n$, then $f'(x) = nx^{n-1}$
$\frac{d^{\left(2-1\right)}}{dx^{\left(2-1\right)}}\left(\frac{d}{dx}\left(\ln\left(1+x\right)\right)+33x^{32}\right)$
4
The derivative of the natural logarithm of a function is equal to the derivative of the function divided by that function. If $f(x)=ln\:a$ (where $a$ is a function of $x$), then $\displaystyle f'(x)=\frac{a'}{a}$
$\frac{d^{\left(2-1\right)}}{dx^{\left(2-1\right)}}\left(\frac{1}{1+x}\cdot\frac{d}{dx}\left(1+x\right)+33x^{32}\right)$
5
The derivative of a sum of two functions is the sum of the derivatives of each function
$\frac{d^{\left(2-1\right)}}{dx^{\left(2-1\right)}}\left(\frac{1}{1+x}\left(\frac{d}{dx}\left(1\right)+\frac{d}{dx}\left(x\right)\right)+33x^{32}\right)$
6
The derivative of the constant function is equal to zero
$\frac{d^{\left(2-1\right)}}{dx^{\left(2-1\right)}}\left(\frac{1}{1+x}\left(0+\frac{d}{dx}\left(x\right)\right)+33x^{32}\right)$
7
The derivative of the linear function is equal to $1$
$\frac{d^{\left(2-1\right)}}{dx^{\left(2-1\right)}}\left(\left(0+1\right)\left(\frac{1}{1+x}\right)+33x^{32}\right)$
8
Subtract the values $2$ and $-1$
$\frac{d^{1}}{dx^{1}}\left(\left(0+1\right)\left(\frac{1}{1+x}\right)+33x^{32}\right)$
9
Add the values $1$ and $0$
$\frac{d^{1}}{dx^{1}}\left(1\left(\frac{1}{1+x}\right)+33x^{32}\right)$
10
Any expression to the power of $1$ is equal to that same expression
$\frac{d}{dx}\left(1\left(\frac{1}{1+x}\right)+33x^{32}\right)$
11
Any expression multiplied by $1$ is equal to itself
$\frac{d}{dx}\left(\frac{1}{1+x}+33x^{32}\right)$
12
The derivative of a sum of two functions is the sum of the derivatives of each function
$\frac{d}{dx}\left(\frac{1}{1+x}\right)+\frac{d}{dx}\left(33x^{32}\right)$
13
The derivative of a function multiplied by a constant is equal to the constant times the derivative of the function
$\frac{d}{dx}\left(\frac{1}{1+x}\right)+33\frac{d}{dx}\left(x^{32}\right)$
14
The power rule for differentiation states that if $n$ is a real number and $f(x) = x^n$, then $f'(x) = nx^{n-1}$
$\frac{d}{dx}\left(\frac{1}{1+x}\right)+33\cdot 32x^{31}$
15
Rewrite $\frac{1}{1+x}$ as $\left(1+x\right)^{-1}$ and apply the power rule together with the chain rule: its derivative is $-\left(1+x\right)^{-2}$
$-\frac{1}{\left(1+x\right)^2}+33\cdot 32x^{31}$
16
Multiply $32$ times $33$
$-\frac{1}{\left(1+x\right)^2}+1056x^{31}$
$1056x^{31}-\frac{1}{\left(1+x\right)^2}$
### Main topic:
Differential calculus
|
2018-11-18 14:08:07
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9076887369155884, "perplexity": 235.25560795518922}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039744381.73/warc/CC-MAIN-20181118135147-20181118161147-00363.warc.gz"}
|
https://kavigupta.org/2015/12/10/Infsabot-Strategy-Part-2/
|
# Infsabot Strategy Part 2
OK, so to continue our Infsaboting.
### Correction
I made a mistake last time. I included SwitchInt, a way to switch on values of ExprDir, which is ridiculous since an ExprDir can only be constructed as a constant or an if branch to begin with.
So just imagine that I never did that.
## Guess and Check
OK, so how should our AI construct these syntax trees? At a very high level, we want to be able to 1) assess trees against collections of trees and 2) modify them randomly. We can represent this as a pair of types:
assess :: [RP] -> RP -> Double
modify :: RP -> StdGen -> (RP, StdGen)
I think the implementation of assess should be pretty clear: simply find the win rate (given that we can simulate an unlimited number of games).
Modify, on the other hand, is a little more complicated. There are a few ways a tree can be modified:
• Modify the value of a constant to be something else.
• Modify the value of a field to point to something else or a field
• Add leaves to the tree
• Remove leaves as a simplification step
It might seem as if some of these operations would be aided by adding a field to everything that tracks typical values over the course of the simulation at different points within the tree. In this way, one can tell how big the difference between two strategies is. We can actually pretty easily keep track of this separately by recording a list of KnownStates from our simulations. For now, we’ll just keep all states or some random subset.
type HistoricalStates = [KnownState]
## Shopping List for Functions
Overall, we want to generate simpler strategies. But what do we mean by “simple”? Well, we can define complexity as the number of leaves of an expression tree:
complexity :: RP -> Int
We might also want to be able to check what happens with small changes to individual parameters. To do this, we return a vector of slightly modified values where each parameter is a dimension.
getDeltas :: Ratio Int -> RP -> [RP]
We also want to be able to apply perturbations to an RP:
applyDeltas :: [Ratio Int] -> RP -> RP
Using these two techniques we can find a local maximum by something akin to Euler’s method for differential equations.
We also want to be able to switch between constants and parameters that make sense given context easily. For this, we can provide a typical set of KnownStates that can be used to find some parameter that would be similar and substitute it in. For simplicity, we’ll only change one at a time.
constantToParameter :: HistoricalStates -> RP -> StdGen -> (RP, StdGen)
This is a little out of order, but simplification should work in a similar manner, replacing complex sections of tree with constants or parameters.
simplify :: HistoricalStates -> RP -> StdGen -> (RP, StdGen)
In fact, simplify can be seen as a more general form of constantToParameter; we can simply require some threshold determining whether or not a parameter or constant is “close enough” to be considered the same as a tree or not.
data GeneratorParameters = GeneratorParameters {
    closeEnoughThreshold :: Ratio Int
}
simplify :: GeneratorParameters -> HistoricalStates -> RP -> StdGen -> (RP, StdGen)
Finally, we want the ability to complicate a given expression tree.
This can be accomplished most atomically by defining a function complicate that is allowed to make structural but non-effective changes, e.g.,
• x -> 1 * x, x / 1, 0 + x, x - 0
• x -> if <some bool expression> then x else x
• b -> b && True, b || False
This will accept an integer argument of how much to complicate its tree.
complicate :: Int -> RP -> StdGen -> (RP, StdGen)
Now, in mentioning everything above, I glossed over some things. For example, most parameters accept some sort of argument, so generating them requires said argument. Additionally, <some bool expression> will need to actually be filled in in the redundant branch seen above. So we need some way to randomly generate parts of a tree
randomlyGenerate :: StdGen -> (RP, StdGen)
In fact, this function is provided by the Random typeclass, so we can instead say
instance Random RP where
...
## An Expr type class
OK, so if you were looking carefully, that last type didn’t make much sense. I said we needed to be able to generate random elements of various types, but the definition I gave was specifically for RP. In fact, all the definitions I gave apply equally well to any of the various Expr* types.
Let’s therefore collate all these function definitions into a single typeclass:
class (Random a) => Expr a where
    complexity :: a -> Int
    getDeltas :: Ratio Int -> a -> [a]
    applyDeltas :: [Ratio Int] -> a -> a
    constantToParameter :: HistoricalStates -> a -> StdGen -> (a, StdGen)
    simplify :: GeneratorParameters -> HistoricalStates -> a -> StdGen -> (a, StdGen)
    complicate :: Int -> a -> StdGen -> (a, StdGen)
OK, so that’s it for now. I’ll cover implementation tomorrow, after I typecheck this mess!
|
2019-02-16 09:15:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6067082285881042, "perplexity": 1604.2692275916077}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247480240.25/warc/CC-MAIN-20190216085312-20190216111312-00230.warc.gz"}
|
https://access.openupresources.org/curricula/our6-8math/en/grade-8/unit-4/lesson-6/index.html
|
# Lesson 6Strategic Solving
Let's solve linear equations like a boss.
### Learning Targets:
• I can solve linear equations in one variable.
## 6.1Equal Perimeters
The triangle and the square have equal perimeters.
1. Find the value of .
2. What is the perimeter of each of the figures?
## 6.2Predicting Solutions
Without solving, identify whether these equations have a solution that is positive, negative, or zero.
## 6.3Which Would You Rather Solve?
Here are a lot of equations:
A.
B.
C.
D.
E.
F.
G.
H.
I.
J.
1. Without solving, identify 3 equations that you think would be least difficult to solve and 3 equations you think would be most difficult to solve. Be prepared to explain your reasoning.
2. Choose 3 equations to solve. At least one should be from your "least difficult" list and one should be from your "most difficult" list.
### Are you ready for more?
Mai gave half of her brownies, and then half a brownie more, to Kiran. Then she gave half of what was left, and half a brownie more, to Tyler. That left her with one remaining brownie. How many brownies did she have to start with?
## Lesson 6 Summary
Sometimes we are asked to solve equations with a lot of things going on on each side. For example,
This equation has variables on each side, parentheses, and even a fraction to think about. Before we start distributing, let's take a closer look at the fraction on the right side. The expression is being multiplied by 3 and divided by 6, which is the same as just dividing by 2, so we can re-write the equation as
But now it’s easier to see that all the terms in the numerator of the right side are divisible by 2, which means we can re-write the right side again as
At this point, we could do some distribution and then collect like terms on each side of the equation. Another choice would be to use the structure of the equation. Both the left and the right side have something being subtracted from . But, if the two sides are equal, that means the “something” being subtracted on each side must also be equal. Thinking this way, the equation can now be re-written with fewer terms as
Only a few steps left! But what can we tell about the solution to this problem right now? Is it positive? Negative? Zero? Well, the 2 and the 5 multiplied together are 10, so that means the 2 and the multiplied together cannot have a positive or a negative value. Finishing the steps we have:
Neither positive nor negative. Just as predicted.
## Lesson 6 Practice Problems
1. Solve each of these equations. Explain or show your reasoning.
2. Solve each equation and check your solution.
3. Elena said the equation has no solutions because is greater than . Do you agree with Elena? Explain your reasoning.
4. The table gives some sample data for two quantities, and , that are in a proportional relationship.
14 21
64
39
1
1. Complete the table.
2. Write an equation that represents the relationship between and shown in the table.
3. Graph the relationship. Use a scale for the axes that shows all the points in the table.
|
2022-08-17 16:38:29
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7461196780204773, "perplexity": 603.4447058833176}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573029.81/warc/CC-MAIN-20220817153027-20220817183027-00726.warc.gz"}
|
https://sciencehouse.wordpress.com/2017/01/13/fake-news-and-beliefs/
|
# Fake news and beliefs
Much has been written of the role of fake news in the US presidential election. While we will never know how much it actually contributed to the outcome, as I will show below, it could certainly affect people’s beliefs. Psychology experiments have found that humans often follow Bayesian inference – the probability we assign to an event or action is updated according to Bayes rule. For example, suppose $P(T)$ is the probability we assign to whether climate change is real; $P(F) = 1-P(T)$ is our probability that climate change is false. In the Bayesian interpretation of probability, this would represent our level of belief in climate change. Given new data $D$ (e.g. news), we will update our beliefs according to
$P(T|D) = \frac{P(D|T) P(T)}{P(D)}$
What this means is that our posterior probability or belief that climate change is true given the new data, $P(T|D)$, is equal to the probability that the new data came from our internal model of a world with climate change (i.e. our likelihood), $P(D|T),$ multiplied by our prior probability that climate change is real, $P(T),$ divided by the probability of obtaining such data in all possible worlds, $P(D)$. According to the rules of probability, the latter is given by $P(D) = P(D|T)P(T) + P(D|F)P(F)$, which is the sum of the probability the data came from a world with climate change and that from one without.
This update rule can reveal what will happen in the presence of new data including fake news. The first thing to notice is that if $P(T)$ is zero, then there is no update. In this binary case, this means that if we believe that climate change is absolutely false or true then no data will change our mind. In the case of multiple outcomes, any outcome with zero prior (has no support) will never change. So if we have very specific priors, fake news is not having an impact because no news is having an impact. If we have nonzero priors for both true and false then if the data is more likely from our true model then our posterior for true will increase and vice versa. Our posteriors will tend towards the direction of the data and thus fake news could have a real impact.
For example, suppose we have an internal model where we expect the mean annual temperature to be 10 degrees Celsius with a standard deviation of 3 degrees if there is no climate change and a mean of 13 degrees with climate change. Thus if the reported data is mostly centered around 13 degrees then our belief of climate change will increase and if it is mostly centered around 10 degrees then it will decrease. However, if we get data that is spread uniformly over a wide range then both models could be equally likely and we would get no update. Mathematically, this is expressed as – if $P(D|T)=P(D|F)$ then $P(D) = P(D|T)(P(T)+P(F))= P(D|T)$. From the Bayesian update rule, the posterior will be identical to the prior. In a world of lots of misleading data, there is no update. Thus, obfuscation and sowing confusion is a very good strategy for preventing updates of priors. You don’t need to refute data, just provide fake examples and bury the data in a sea of noise.
## 6 thoughts on “Fake news and beliefs”
1. (3 comments inspired by caffeine, etc.)
1. Interesting and relevant topic—one I only vaguely comprehend, especially in my current state. To me Bayes’ theorem reduces to p(T,D)=p(T,D): just cancel the same terms on both sides of the equation (change conditional probabilities to joint ones), so I don’t really get the point.
(If one includes ‘time’ then P(T(t′)/D(t)) may not equal P(D(t′)/T(t)). I use / for | here—no LaTeX.)
I also tend to think in terms of Venn diagrams.
In a paper by A. Caticha on Bayesianism (on his website) he gives this example.
The planet Mars is known to have water on it.
All planets with life on them have water on them.
Given that we know Mars has water, what is the probability it has life on it?
I draw 3 circles: one big circle, which is all the planets P; a smaller one inside that one, which is all planets with water W; and a smaller one inside that one, which is all the planets with life on them L.
So the probability of life on Mars is the size of the smallest circle divided by the size of the second biggest circle, L/W. The sizes of the circles are guesstimates or probabilities.
He says frequentism can’t describe this as well as Bayes, but I don’t see the difference. He also says one can’t use frequentism because there is no data about how common life is on the planets of the universe.
I think there is some data (guesstimates) about how many planets P there are and how many have water W; life has one data point, so the probability now is 1/W. I guess Bostrom and F. Dyson would have different estimates for L. (And if there is, say, silicon-based life, one needs a different Venn diagram to find L.)
I have seen an article describing how one converts arguments using Bayes’ theorem into the language of Venn diagrams (which to me represent the frequentist interpretation)—basically I remember it as being a simple exercise.
2. Given my difficulty understanding your argument, and I have a science degree, I wonder what the probability is that the US population or certain subgroups of it can understand your argument. (Nate Silver’s 538 says it uses Bayesianism, and they got the election wrong. A simple algorithm by someone at AU got it right.) Propaganda or noise (e.g. Hitler, Lippmann and Bernays on ‘manufacturing consent’ and public relations) likely can trump Bayesian analyses—it’s the addictive mental equivalent of sugar.
‘Mass hysteria’, or confusion, or psychosis may be like the obesity or opiate epidemics.
I’ve come across a lot of people who believe in ‘conspiracy theories’—e.g. fake news (Bush was responsible for 9/11, AIDS is not caused by a virus or sex but just bad diet, the Loch Ness monster and yetis exist, JFK and other assassination theories… a new one is that Trump actually was the one who leaked that 35-page article about sex scandals in Russia). There is so much news out there it’s impossible to fact-check and process.
The same is true of the science literature—I am often attracted, mostly from a psycho-social view, to ‘crank’, dissident, or woo scientific theories. (E.g. I have read a lot of arguments both pro and con on GMO food, vaccines, and AIDS and cancer treatments.) Peer review is supposed to sort this out, but many fields basically polarize into camps, groups, or schools (e.g. Chomskyites vs connectionists in linguistics, group selectionists vs others in biology, and still a few ‘local hidden variable’ physicists vs standard views, views of the new ‘entropic gravity’ theory—S. Weinberg is even questioning quantum theory) with PhDs with good CVs on both sides. The son of a friend is big into alt-right conspiracies, thinks Soros is the devil… I saw a music show at Planet Ping Pong, which was hit by another theory.
The academics who write for the ‘Genetic Literacy Project’, which promotes GMOs, are in constant ‘feuds’ with academics who write for ‘Science for the People’, ‘Council for Responsible Genetics’, etc. (Same thing for IQ and race issues.) It gets more complex when one sees ‘conflicts of interest’—e.g. Koch brother funding, etc. Anti-psychiatrists doubt the value of psychotropic drugs. Many people don’t trust experts, sometimes for good reason (e.g. maybe the war on Iraq, or the bank bailout in 2008).
One sees this in the global warming issue as well. Given the cold in DC recently, and the floods in drought-stricken California, even I’m doing my Bayesian updating for the probability of anthropogenic global warming.
How is anyone to know what is true, especially given cultural indoctrination, the educational system, and finite time and cognitive energy?
A local economist who uses bayesian methodology wrote a paper ‘information dynamics’ looking at social flows of knowledge.
3. To me this is a general issue of pattern recognition—eg can one tell if a series is random or generated by a deterministic chaotic system (classic paper in ergodic thery on this is by Ornstein and Weiss in Bull Am Math Soc in 1991).There are all kinds of ‘illusions’ or spurious correlations colllected on the blog replicated typo and in wikipedia—hot hand fallacy of Kahneman and Tversy (which likely isnt—they assumed basketball throws are as statistically independent as coin flipping which i highly dought—humans arent coins); ‘illusion of randomness’ (a great one), etc.
Slutsky-Yule theorem of 1930’s suggested in one interpretation that economic cycles were basically artifacts of data mining and aggregation—so there was actually no great depression except to economists.
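The Slutsky-Yule effect is easy to demonstrate: smoothing pure noise with a moving average manufactures apparent cycles. A minimal sketch in Python (the window length and series length are arbitrary choices for the demo):

```python
import numpy as np

rng = np.random.default_rng(0)
noise = rng.normal(size=500)   # white noise: no cycles by construction

# A 20-point moving average of the noise, as an aggregated "economic" series.
window = 20
smoothed = np.convolve(noise, np.ones(window) / window, mode="valid")

# Count sign changes as a crude measure of oscillation: the smoothed series
# crosses zero far less often, drifting in long pseudo-cyclical swings.
def sign_changes(x):
    return int(np.sum(np.diff(np.sign(x)) != 0))

print("sign changes, raw noise:", sign_changes(noise))
print("sign changes, smoothed :", sign_changes(smoothed))
```

Plotting `smoothed` shows slow waves that look like business cycles even though the input is memoryless noise, which is exactly the point of the theorem.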
2. P.S. The Slutsky-Yule theorem was used to dispute the 'hockey stick' graph on AGW, the claim being that it was actually just random with no trend. This was written up in American Scientist, and R. Muller of UC Berkeley physics promoted it for a while but recanted. It also came up in discussions of work by Scafetta of Duke physics, who also basically disputes the climate consensus; he fits climate data with higher-order polynomials so he can get any trend he wants.
3. P.S. 2: I think I understand this.
One of my problems is vocabulary: I never took probability theory, so 'the prior', 'likelihood', and 'posterior' are a foreign language.
I studied standard Markov (and non-Markov) processes, which have the same math (e.g. the master equation, path integrals, or the CKS equation). (I read some E. T. Jaynes papers but, except for a few cases, saw nothing much new, memorable, or interesting in them. I'm not even sure he solved any new problems; his discussion of maximum entropy applied to economics didn't seem to have any content. Just a different dialect.)
There is also the issue of order, or presentation. This is the exact same issue as why schools now put fruits and vegetables first, before french fries and ice cream; it is also called path dependence. Depending on the order of presentation, people develop 'priors' or theories. If given french fries first, people may adopt as a prior that french fries make them feel good, and later reject any conflicting evidence. Same with, say, criminal behavior. There is a wide distribution of outcomes in the data.
Once you have a prior (or, say, a prejudice), any other evidence may be interpreted as noise. Just as with language learning or addiction, once you acquire a prior language or habit you no longer update your 'prior'. Humans are not infinitely malleable or plastic. (There are so many new computer languages, Python, R, etc., and I never learned any except some fairly basic C++, since I basically didn't want to be in front of a computer all day; so I don't update my prior. What is the reason to think Python should be learned? One reason was explained to me by my niece, who is going to MIT: that's how you get the skills to get a job at Facebook. I am much more in front of a computer than I should be or want to be. A lot of computer stuff for me is unpaid gossip on Facebook, fact-checking, or education at basically elementary-school level but for adults who are too busy self-promoting their own half-baked ideas to have any time or interest in doing research.)
Some people get educated, learn Python, and make big money working for Facebook, while others spread fake news and ignorance on Facebook. Standard Lotka-Volterra dynamics. Sense and nonsense, like DNA, may be codependent. The more fake news, the higher goes the stock market.
This is why some people develop their own schools and curricula. (I know people who have done this, and I taught for a while in an alternative school, and sort of stupidly quit that job over a semi-minor conflict.)
There are about 7 billion people in the world, and the same number of 'priors'. There is new data or information every day, but it will be filtered.
4. […] is to one. Once the likelihood becomes insensitive to data then we are in the same situation as before. Technology alone, in the absence of fake news, could lead to a world where no one ever changes […]
5. Could you provide some references regarding which psychological experiments have "found that humans often follow Bayesian inference"? That sounds overly simplistic, so I am skeptical.
|
2018-01-21 18:47:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 12, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.397339403629303, "perplexity": 1739.8710095463478}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084890823.81/warc/CC-MAIN-20180121175418-20180121195418-00602.warc.gz"}
|
https://www.projecteuclid.org/euclid.aaa/1412606640
|
## Abstract and Applied Analysis
### Optimal Kalman Filtering for a Class of State Delay Systems with Randomly Multiple Sensor Delays
#### Abstract
The optimal Kalman filtering problem is investigated for a class of discrete-time state-delay stochastic systems with randomly occurring multiple sensor delays. The measurement delays occur in a random way, and the delay rate for each sensor is described by a Bernoulli-distributed random variable with known conditional probability. Based on the innovation analysis approach and a recursive projection formula, a new linear optimal filter is designed such that, for the state delay and the randomly occurring multiple sensor delays with different delay rates, the filtering error is minimized in the mean-square sense, with the filter gain obtained by solving a recursive matrix equation. Finally, a simulation example is given to illustrate the feasibility and effectiveness of the proposed filtering scheme.
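For orientation, here is a minimal sketch (in Python) of only the classical, delay-free Kalman recursion that the paper generalizes; the delay-compensating gain of the paper itself requires the recursive matrix equations derived there, and the model matrices below are arbitrary placeholders, not the paper's simulation example.

```python
import numpy as np

def kalman_step(x, P, y, A, C, Q, R):
    """One predict/update cycle of the standard Kalman filter.

    x, P : prior state estimate and its covariance
    y    : new measurement
    A, C : state-transition and measurement matrices
    Q, R : process- and measurement-noise covariances
    """
    # Predict
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Update via the innovation y - C x_pred
    S = C @ P_pred @ C.T + R             # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ (y - C @ x_pred)
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new

# Placeholder 2-state system, purely illustrative.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
C = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)
R = np.array([[0.1]])
x, P = np.zeros(2), np.eye(2)
x, P = kalman_step(x, P, np.array([0.5]), A, C, Q, R)
```

In the paper's setting, the measurement update is additionally weighted by the Bernoulli delay indicators, so the gain K is replaced by a delay-rate-dependent expression.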
#### Article information
Source
Abstr. Appl. Anal., Volume 2014 (2014), Article ID 716716, 10 pages.
Dates
First available in Project Euclid: 6 October 2014
https://projecteuclid.org/euclid.aaa/1412606640
Digital Object Identifier
doi:10.1155/2014/716716
Mathematical Reviews number (MathSciNet)
MR3200802
Zentralblatt MATH identifier
07022936
#### Citation
Chen, Dongyan; Xu, Long. Optimal Kalman Filtering for a Class of State Delay Systems with Randomly Multiple Sensor Delays. Abstr. Appl. Anal. 2014 (2014), Article ID 716716, 10 pages. doi:10.1155/2014/716716. https://projecteuclid.org/euclid.aaa/1412606640
|
2019-12-07 21:27:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.306745320558548, "perplexity": 3014.434278974808}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540502120.37/warc/CC-MAIN-20191207210620-20191207234620-00463.warc.gz"}
|
https://rpg.meta.stackexchange.com/questions/6445/is-it-ok-to-downvote-for-low-effort-when-the-only-way-to-answer-quickly-is-pir/6448
|
Is it OK to downvote for “low effort” when the only way to answer quickly is piracy?
My question https://rpg.stackexchange.com/questions/86316/is-there-a-list-of-cleric-and-wizard-spells-that-deal-lightning-or-thunder-damag?noredirect=1&lq=1 was downvoted, with a comment explaining that the −1 was for low effort.
Is it OK to downvote a question for low effort if the question only counts as low effort when one uses pirated material?
More to the point, if the correct answer would require someone to read (let's say) five books with more than a thousand pages in total, isn't it a valid question to ask, even if a pirate can steal and digitally search materials to produce the answer within a short time?
Downvoting for low effort would seem to say that I should have pirated the material to solve my problem, and I don't think that's a base assumption that rpg.se should make.
• Who said? Where in chat? Could you provide a link so others can figure out what's going on? (You can get a link to a chat message by clicking on the down-arrow-menu on the left side of a message when you hover over it, and choosing “permalink”.) Aug 7 '16 at 19:29
• But then I do exactly what made mxy force-delete my last question. It's not relevant; everything that matters is in this question. I will rephrase it a bit.
– Mala
Aug 7 '16 at 19:30
• *sigh* I'm not going to play this game with you. Closed as unclear; you're welcome to clarify it, or not. Aug 7 '16 at 19:31
• The question doesn't make sense, which is why context is needed. We don't solve hypothetical, vague, or rumoured problems on meta; we discuss actual things that are actually happening, and we provide all context necessary so that everyone can understand and participate equally, in the open. Aug 7 '16 at 19:39
• I'm unfamiliar with 5e publishing, but a quick search of our own site revealed links to at least two fully legal digital sources for some or all of the PHB's spells (albeit behind registration or pay walls so I couldn't check their completeness), and someone in chat found website with all the spells which looks dubious to my own eye but claims to be in compliance with the WotC Fan Site guidelines. Absolutist claims that piracy is necessary to search 5e spells digitally seem weak in the face of this.
– BESW
Aug 7 '16 at 20:23
• @DanHenderson Mxyzplk took no issue with posting context/links here. The order was to stop harassing another user by posting inappropriate main-site questions about one of their comments, and if Mala wanted to ask an honest question about spell searching, they could do it without referring to the other user or the comment. That order to quit using main-site questions to harass that one user isn't at all relevant to providing context for this meta question to make it make sense. Aug 8 '16 at 7:00
• Having yesterday done the digging around necessary to understand this situation, I've edited the question to include the necessary context. Based on that real context, I've also edited the question to be about the real situation (it's just about one downvote reason, not about close votes or flags). I've also taken the liberty of changing most of the assertions in the question into questions—that makes the question sound more curious and less combative, making it more inviting to answer in a spirit of learning. Aug 8 '16 at 14:56
• Yes, Mala is confused between my directive to not post main site questions hassling another user about their comments and SSD's directive to post clear links/references in this meta question complaining about that. Those are two completely separate things. We've explained it a dozen times but it hasn't taken. Aug 9 '16 at 4:10
• @Mala Nope, for the reasons already explained. We just dealt with this over pages of moderation messages. We're not going to watch you try to continue that argument here. Comments doing that will be removed. Aug 9 '16 at 8:41
• Mala, being privately unwilling to accept any result other than the one you want is fine, in private, but you do need to stop repeating yourself after being told clearly and repeatedly that your demand is rejected. Repeating yourself ad nauseam is not going to change anything. Comments removed, as promised, and Officially: Knock it off. Aug 9 '16 at 20:30
A question or answer can be downvoted for any reason. In a perfect world, it should not be, but taking the risk of asking or answering means that sometimes voters dislike a question or answer for whatever reason, and that reason doesn't matter. The reason can't actually matter: even if the voter gives a tasteless or counterintuitive reason for the downvote, if the voter wants that question downvoted, downvoted it will stay (patterns of system-gaming, user deletion, and so on notwithstanding). Bribes, friendly persuasion, or links to a Meta post might get the voter to reconsider, but the voter is never obliged to reconsider. In fact, often, trying to force the voter to see the error in his downvote only reinforces that you, not the question, deserve the downvote. (I've seen a few questions and answers that now seem innocuous but that, when posted, yielded comment wars (now deleted) with folks pretty obviously downvoting the user rather than the question or the answer, for instance.)
Seemingly well-researched and clear questions, and likewise seemingly useful answers, get downvoted for worse reasons than "you should just use illegally obtained sources" (and I totally agree that this is a bad reason). Worse reasons, in my opinion, include the voter believing a question's answer so obvious that the question shouldn't even have been asked in the first place despite the asker's sincerity, or the voter disagreeing with an answer's conclusion no matter how otherwise expertly presented and informative the answer is; both reflect my own experiences.
Asking and answering questions eventually yields downvotes, and most of the time the voter is anonymous and the reason opaque. In this case, though, you were fortunate to encounter a voter who explained the reason for the downvote, and, while this is certainly no consolation, that's more feedback (even if it is terrible feedback) than most folks receive when their question or answer is downvoted.
Note: I've no horse in this race, not interesting me, but I do answer many et al. make-a-list-for-me questions, some of which receive downvotes for no apparent reason.
So, we have some material which is not legally available in electronic format, and it is thus impossible to Ctrl+F search it unless a lot of scanning and turning the images into text is done. That is surely a lot of work, but I don't feel it is tied to effort as we mean it when casting downvotes.
Basically, the problem is not "how long it takes" but "how hard is it to grasp". Writing a list is more often than not just a matter of time spent parsing books, with no real judgement involved. Anybody can do it. So, despite manual searching being a bothersome task, I feel like "effort not shown" is a valid downvote reason.
Usually, effort is shown where the querent has problems understanding how a thing works despite reading the whole thing. An exception that makes a question good despite the low effort is when the material is scattered and hard to put together (the "I don't even know where to start looking" problem).
In the end, I don't think this is really important. Unlike close reasons, users are not asked to state why they downvote, and if someone downvotes because they think there's no effort, well, they think there's no effort. Are they basing this assumption on taking piracy for granted? Their problem. Should they comment about their reasons, I would personally ask them how they think the task could be done quickly. That's usually enough to make other readers notice the problem and be aware that they might not want to believe the first commenter about the perceived effort.
• If the body of material is too big, it's a big effort. Just because someone knows all this by heart and knows the, let's say, 10 places to look, that's expertise that's non-trivial to get. There's a reason a site like this is so useful; experts might solve in minutes something that takes novices many hours...
– Mala
Aug 7 '16 at 17:35
• Also, I get your point, but this doesn't really answer the question. Reading a thousand pages (as per the assumption in the question) is not low or insufficient effort...
– Mala
Aug 7 '16 at 17:49
• "If someone downvotes because they think there's no effort, well, they think there's no effort." That point probably needs emphasis. The voting system is a core tool of all SE/SO sites, and has certain features that may or may not please some of us as users. One such feature is anonymity, which does not require a comment to accompany a down vote. Leaving comments to ask for clarity or improvement in a question or answer is encouraged, but not required. None of us gets to dictate how others vote here. Aug 9 '16 at 4:51
• @KorvinStarmast But as Seven rewrote in the question, "Downvoting for low effort would seem to say that I should have pirated the material to solve my problem, and I don't think that's a base assumption that rpg.se should make." Do you really disagree with that statement?
– Mala
Aug 9 '16 at 18:20
• Mala, I offer you no opinion on that statement. What I offer you is the point that none of us gets to dictate how others vote. We had an extended discussion with @nvoigt last year on meta regarding voting, particularly how voting on this stack seems to be different from other stacks. So, I'll bold it in an effort to make it crystal clear: None of us gets to dictate how others vote here. That includes SSD, me, you, BESW, everyone. How voting manifests as a behavior can occasionally be frustrating, but it's a feature, not a bug, of the SE/SO sites. Aug 9 '16 at 18:43
• @Mala you make it look like you want to give Korvin no choice but to agree, otherwise he "must" be pro-piracy. I suggest you try to avoid rhetorical questioning. (I, for one, disagree with the sentence you quoted, because I don't think the "would seem to say" is necessarily true.) Aug 9 '16 at 18:48
• @KorvinStarmast Indeed, it's unfortunate that these un-SE-like voting patterns are so prevalent here. I do disagree that it's a feature - it's really a bug in my eyes. Oh well. Anyway, I am far from wanting to dictate anything, far less being able to do so, so that bolded part is moot no matter what.
– Mala
Aug 9 '16 at 18:53
• @Zachiel Sorry to give that impression, but that's really the core of the question - is it low effort if the 'low' requires bad behaviour. Anyway, it's Seven's words, not mine (he quite extensively rewrote my original question).
– Mala
Aug 9 '16 at 18:54
• @Mala I do disagree that it's a feature - it's really a bug in my eyes. RPG.SE is made up of people, not bugs. ;-) Aug 9 '16 at 18:59
Yes, it's fair to downvote a post — period, full stop. Anyone can downvote for whatever reason they like.
Why are “wrong” votes allowed though?
It's a necessary feature of the site's functioning. To make the site work, we need accurate scores beside posts. To get accurate scores for posts, we need as much data as possible. To get that, the site's design and software encourages as much voting as possible of all kinds, because more votes means more accurate scores.
A consequence is that “wrong” votes (both up and down) must be allowed, because any barrier to voting would drastically decrease voting.
Fortunately, by encouraging many votes, the effect of “wrong” votes is automatically fixed: the more votes you have, the more “wrong” votes disappear into the statistical noise. What matters is what most of the votes say, not what one vote says.
So votes being at the full discretion of the voter is a necessary, non-negotiable feature of the site since, without it, the site would be really bad at the job it exists to do. It's a feature, not a bug. You get to vote however you like, and so does everyone else. Enjoy the privilege, and accept that it means everyone else will too.
But still, why is the question “low effort”?
Although the basic fact is that the votes don't need to be justified at all, we can still look at why this question could be judged “low effort”.
Here is your objection, rephrased: But if it would take a lot of effort to answer a question, how can it be low effort to ask it?
Notice the difference: high effort to answer; low effort to ask. The fact that these apply to different things — the answering and asking — is the key to understanding why a question can require high effort to answer but still count as unacceptably low effort to ask.
In this case, “why is this low effort” is easily answered. The trouble with this question is that it already contains its solution, and the solution takes effort to use. The real problem you have is one that this site is not made to solve: that you just don't want to expend the effort to use the solution you already have.
That's low effort. “I know how to get the answer to my own question, but I don't want to do the work. Here is the work that needs doing; please do it for me so that I don't have to waste my own time,” is going to get downvoted for low effort.
Answering the question might be high effort, but that's not what the downvote means — the downvote is on the question. It means that the asker has not made enough effort to solve their own problem, or is asking the question to try to avoid making any effort to solve their own problem.
Asking other people to do work that you can do yourself (but don't want to) is making low effort.
More generally stated: how much effort it takes to answer a question doesn't matter; what matters is whether the question appears to exist to help the asker avoid doing work. When it does, it will naturally attract a few or many downvotes for low effort.
This is statistical though, not absolute
But keep in mind that voting is purposely statistical — what one voter thinks is low effort is not important, it's what the hypothetical “average” voter thinks that the site's voting mechanics are designed to harvest. So if some voters think a question is low effort, it will get some downvotes, and if many voters think it is low effort, it will get many downvotes. If there are many voters who disagree, these will be counterbalanced by upvotes. The collective judgement is what matters.
Overall the point is this: one downvote is nearly nothing. If you want to understand why someone would vote a question for low effort, the above is relevant, but it's really a waste of time caring about one downvote. It's only when more or many votes agree that it becomes worthwhile to try to parse what they mean.
(And this is why SE doesn't/won't ever require commenting when downvoting: it would wrongly focus on individual reasons for voting, when SE's voting system is designed to focus on an aggregate judgement, not individual judgements.)
• Asking other people to do work that you can do yourself (but don't want to) is making low effort. When I encountered the question, I refrained from making this point in a comment even though it was my gut response. I am glad that on meta we can, without cluttering up the main site, speak frankly to one another via the question and answer method. Aug 10 '16 at 18:35
• RE: "[Low effort means] 'I know how to get the answer to my own question, but I don't want to do the work. Here is the work that needs doing; please do it for me so that I don't have to waste my own time.'" Really? For such questions, wouldn't "someone has already done this" (if someone has), plus a link and an evaluation of the work, or advice like "it's legal for you to do this yourself because..." be a better response than a low-effort vote and "do it yourself, ya lazy bum"? Aug 12 '16 at 10:12
• @HeyICanChan If we're talking about answers, yes, those are better. But talking about downvotes and whether the tooltip applies, this addresses the primary confusion the OP has about how avoiding high effort could be seen by a voter as low effort. Note too that between otherwise similar questions, one might attract "low effort" votes and another not, based on what impression it gives voters. Aug 12 '16 at 14:28
• Cool. Just making sure there's an in-between and not only a high effort-low effort dichotomy. Aug 12 '16 at 16:18
• @HeyICanChan Should I add something about how low effort doesn't make a question unworthy of an answer (i.e., doesn't make it closeable)? That shouldn't be the take-away here, but I can't tell if it might be doing that anyway. Aug 12 '16 at 16:21
• I wouldn't have commented had that been my takeaway. That is, it sounds like your rubric is pretty unyielding. But I've a tendency toward over-reading and extremism, so… I dunno. How many folks do you think will read it like I did and not comment? Aug 12 '16 at 16:27
• @HeyICanChan I think I see what you mean? I gave it an edit, but I'm not sure if it addresses that. I'm having a hard time figuring out how to present my evaluation of the question, which explains why I think it's obvious that someone could downvote it for low effort, without making it sound like an absolute rubric that everyone does/should use. Aug 12 '16 at 16:41
• RE: "Asking other people to do work that you can do yourself (but don't want to) is making low effort." I know you know, but that's not what the button says. The button says low research effort, which can be distinguished from low personal effort. "I can't find this on the Web; has anyone else found it?" might be just bad research but involve a high amount of personal effort, and (while that question's terrible) that's rewarded on the rubric presented despite the button saying to do otherwise. Aug 12 '16 at 16:51
• That is, I answer a lot of questions that amalgamate a number of sources, and some of those questions might get downvoted by a casual reader who go by the letter of this when, in fact, the research is hard and involves a lot of prior knowledge. It'd be a shame to see those questions retroactively punished for this rubric. Aug 12 '16 at 16:53
• @HeyICanChan “I know where to find exactly the information I want, but I don't want to do the final step in such research. Please do the reading for me and compile the results?” is low research effort though. There is lots of value to having expertise that allows the research in the first place, which is great for questions where even knowing how to start requires that expertise. Those aren't low research effort questions; they tried and failed, and turned to experts for help. This question isn't that kind though. Aug 12 '16 at 16:54
• I understand the distinction. Really, I do. But that the distinction is more in the question's presentation than its substance is the part that's difficult to quantify when explaining what could later be cited as a rule. Aug 12 '16 at 17:08
• @HeyICanChan This isn't a rule though, it's just explanation. The rule is "people vote how they like" and "the tooltip gives guidance", which we inherit from SE in general. Ironically, codifying it more precisely would actually make it more mistakeable for a rule, and I don't want to encourage that at all! As for distinguishing substance and presentation—that's why we recruit the complex rational-intuitive discernment of human voters to suss that out in aggregate. I happen to personally think there's a substance problem with the question, but that's why I only get one up/down vote. Aug 12 '16 at 19:17
• Fair enough. However, I had one of my questions fall victim to a nonrule Read the Book to Me idea, citing a mod post about it. I can imagine the same thing happening here. Aug 12 '16 at 19:22
• @HeyICanChan People will be people; flawed and imprecise. It does not look like that citation of the idea was endorsed by many, and was in fact rebuked by at least three. It looks like noise on the wire, properly compensated for by the system. Aug 12 '16 at 19:24
• I think part of the issue here is that it often takes a long time for a citizen to separate down votes and close votes; the idea that a question can be righteously downvoted without any reason to close it is admittedly odd on its surface, and I've seen the two curation systems (sorting existing content by quality and moderating content to limit new low-quality additions) conflated quite often.
– BESW
Aug 12 '16 at 23:38
|
2022-01-22 20:55:23
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3760284185409546, "perplexity": 1218.7350046020276}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320303884.44/warc/CC-MAIN-20220122194730-20220122224730-00047.warc.gz"}
|
http://fkil.giardinilegnone.it/logistic-regression-in-baseball.html
|
# Logistic Regression In Baseball
I like baseball, so this example (again) from Shane Jensen appeals to me. The hyperparameters were tuned using 10-fold cross-validation. Given a dependent variable and one or more independent ones, a logistic regression will solve for the logarithm of the odds that a binary event is going to occur. Two types of regression models were compared using the same database: a linear regression model and a logistic regression model. In R, a two-level factor has 0 assigned to the level that comes first in the alphabet. Initially, we created features from the average of the players' batting statistics (average, OBP, slugging, OPS) for each team and from the starting pitcher's ERA for each team. Regression toward the mean (RTM, for clarity in this article) is the concept that any given sample of data from a larger population (think April stats) may not be perfectly in line with the underlying average (think true-talent/career stats), but that going forward you would expect the next sample to be closer to the underlying average than the first one was. The dependent variable in the logistic rate model is the log of the odds of the graduation rate. Regression analysis investigates the relationship between variables; typically, the relationship between a dependent variable and one or more independent variables. It sounds complicated, but you will be surprised how easily it can be done (under 5 minutes). A goal: be able to run a logistic regression and interpret the results. I want to know if Python works the same way. A Scikit-Learn tutorial on using logistic regression and random forest models to predict which baseball players will be voted into the Hall of Fame: in Part I of that tutorial, the focus was determining the number of games that a Major League Baseball (MLB) team won in a season, based on the team's statistics and other variables from that season. Sabermetrics (or SABRmetrics) is the empirical analysis of baseball, especially of baseball statistics that measure in-game activity; sabermetricians collect and summarize the relevant data from this in-game activity to answer specific questions. Given the rise of Python in the last few years, and its simplicity, it makes sense to have this toolkit ready for the Pythonists in the data science world.
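Several of the fragments above describe the same basic recipe (a binary win/loss outcome modeled on team statistics with scikit-learn), so here is a minimal, hedged sketch of that recipe; the feature meanings and the synthetic data are invented for illustration and are not the page's actual dataset.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a game-level dataset: each row is one game, with
# hypothetical features (team OPS, opponent OPS, starter ERA), standardized.
rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 3))
# Invented "true" relationship: higher own OPS helps, higher ERA hurts.
log_odds = 1.2 * X[:, 0] - 0.8 * X[:, 1] - 0.6 * X[:, 2]
y = rng.random(1000) < 1 / (1 + np.exp(-log_odds))   # True = win, False = loss

# A common 70/30 hold-out split (the page itself mentions test_size below).
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
print("coefficients (log-odds per unit):", model.coef_[0])
```

Each fitted coefficient is the change in the log-odds of a win per unit change in that feature, which is exactly the interpretation quoted in the next passage.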
Winning Games in Canadian Football: A Logistic Regression Analysis, by Keith A. Willoughby, who received his Ph.D. in Operations Management from the University of Calgary in 1999. My next step was to throw all of my features into a few classification models. A number of sports-injury studies have used related methods, summarized in the table below:

| Study | Year | n | Method |
| --- | --- | --- | --- |
| Orchard [63] | 2015 | 235 | Logistic regression (football) |
| Lovell [64] | 2006 | 19 | Logistic regression |
| Piggot (master's thesis) | 2009 | 16 | Pearson correlation |
| Brink [65] | 2010 | 53 | Multinomial regression |
| Rogalski [66] | 2013 | 46 | Logistic regression, Χ² |
| Colby [67] | 2014 | 46 | Logistic regression, Χ² |
| Ehrmann | 2015 | 19 | Unable to assess article |
| Lyman [68, 69] (other sports) | 2001 | 398 | GLM |

B.Comm., Anáhuac University, 2001; a project submitted in partial fulfillment of the requirements for the degree of Master of Science in the Department of Statistics and Actuarial Science, Faculty of Science: Fabián Enrique Moya, 2012, Simon Fraser University. The key interpretive fact for these models: if X increases by one unit, the log-odds of Y increases by k units, holding the other predictors fixed.
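To make the log-odds statement concrete, here is a tiny worked example; the coefficient value k = 0.4 is invented for illustration.

```python
import math

k = 0.4  # hypothetical fitted logistic regression coefficient for X

# A one-unit increase in X adds k to the log-odds of Y = 1,
# i.e. multiplies the odds of Y = 1 by exp(k).
odds_ratio = math.exp(k)
print(f"odds ratio per unit of X: {odds_ratio:.3f}")  # about 1.492

# Example: if the odds were 1:1 (p = 0.5), one more unit of X gives
new_odds = 1.0 * odds_ratio
new_p = new_odds / (1 + new_odds)
print(f"p moves from 0.500 to {new_p:.3f}")           # about 0.599
```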
105766 × RD and W >= 95 ⇒ 95 >= 80. Thunder Basin Antelope Study Systolic Blood Pressure Data Test Scores for General Psychology Hollywood Movies All Greens Franchise Crime Health Baseball Basketball Denver Neighborhoods Using Technology: U. We’ll be testing two questions using logistic regression, one with data from the entire season (all 162 games) and one looking only at games that Buster Posey played in. The second batter was in the same period, but this time in the NL, with 20 hits and 2 home runs. (For the initiated, the exponentiated B on the final model for DL stint was 1. 63Orchard 2015 235 Logistic regression Football 64Lovell 2006 19 Logistic regression Piggot (master’s thesis) 2009 16 Pearson correlation 65Brink 2010 53 Multinomial regression Rogalski 66 2013 46 Logistic regression Χ2 Colby 67 2014 46 Logistic regression Χ2 Ehrmann 2015 19 Unable to assess article Other sports 68ymanL 2001 398 GLM 69ymanL. use the participation of pairs of nodes in 16 different types of triads as features for the learning algorithm. As to extra pitches, it's harder to show the effects of what an extra pitch does to the chances of injury next year, owing primarily to the way that logistic regression works and that there are other factors involved. Beta-binomial regression, and the gamlss package in particular, offers a way to fit parameters to predict “success / total” data. This implies that, just like regression, our logistic regression approach has no chance of capturing the non-linear nature of the true $$p(x_1,x_2)$$. For instance, you can express the nonlinear function: Y=e B0 X 1B1 X 2B2. , deviance, AIC, p-values, z-values, and standard errors), clone the current tab and follow the steps in Extract logistic regression fit statistics in the cloned tab. Lab 4 (2/18): Multiple logistic regression, NFL field goals. They just give you an intuition on how these models work which may leave you in the hassle of choosing the suitable model for your problem. logistic regression, the outcome is categorical (can only take on 0 or 1), the residuals for a noninjured pitcher can only be negative, and the residual for an injured pitcher can only be positive. The red line is called a "least-squared" regression line, which smooths out the ups and downs by showing the line that best matches the game total to the probability of a first-inning run. Chapter 24 Regularization. Willoughby ([email protected] Ferrari and Cribari-Neto (2004), Cribari-Neto and Zeileis (2010), and Grün, Kosmidis, and Zeileis (2012) on beta regression. Bayesian Analysis of a Linear Regression Model Neter et al. 7 , 45304; doi: 10. the result can be 1, 4. In this case, the logistic regression coe cients must be given prior distributions while the response variable is assigned a likelihood function. The goal of regression is to estimate (predict) these quantities when they are unknown through the use of additional, possibly correlated quantities, for example the offensive and defensive statistics of a baseball player, or the location and attributes of a home. We estimate these two coefficients using "maximum likelihood estimation". Our regression equation for wins is: W = 80. What is the predicted probability of a player being a shortstop if he had 500 at bats, 125 hits, 3 home runs, 87 runs scored, 32. die Mac dataset. logistic regression, the outcome is categorical (can only take on 0 or 1), the residuals for a noninjured pitcher can only be negative, and the residual for an injured pitcher can only be positive. 
Chapter 7 Logistic Regression: Further Issues. Understanding of statistics and modeling techniques, including probability, linear regression and logistic regression. Applying Linear Regression model on the Baseball data set to predict number of wins on the basis of Run Difference. A few hours of work got me a tool that evaluates: Page TrustFlow and CitationFlow (from MajesticSEO – I’m adding SEOmoz metrics now) Links per word. Logistic Regression Tree level 2. A Logistic Regression Model for Determining Win Expectancy in MLS Statistics have always fascinated me, statistics in sports doubly so. 1 Implement Logistic Regression and Naive Bayes 1. Elbow Injuries in Youth Baseball Players Without Prior Elbow Pain. Preview Big Data Analytics Tutorial (PDF Version) Buy Now$ 9. These pitchers were more frequently starting pitchers, pitched in more. edu Statistics II. This lesson also introduces a lot of new dplyr verbs for data cleaning and summarizing that we haven't used before. This dataset comprises of sales transactions captured at a retail store. The parameter test_size is given value 0. 1 The James–Stein Estimator 91 7. Not surprisingly, after surviving multiple world wars, the Great Depression, and over 125 years, it is commonly referred to as “America’s Past-time”. IN MATHEMATICS AND STATISTICS, MINNESOTA STATE UNIVERSITY, MANKATO, MINNESOTA, DECEMBER 2014 Abstract. Based on all of these methods, there is no doubt that a certain number of players exhibited definite streakiness in certain years. 584 Linear Regression jobs available on Indeed. The Poisson regression formula y = e (b 0 + b 1x) serves to explain the frequency count of an event (y) by an explanatory variable x. Approach To create a classifier, we first gathered statistics from baseball-reference. 1 Regression Trees In order to motivate regression trees ,webeginwithasimpleexample. regression tree Predicting Baseball PlayersÕ Salaries Using Regression Trees We use the Hitters data set to predict a baseball playerÕs Salary based on Years (the number of years that he has played in the major leagues) and. Chapter 7 • Modeling Relationships of Multiple Variables with Linear Regression 162 all the variables are considered together in one model. 951 means that 95. For example, how is the probability of winning a baseball game related to the number of hits a team gets in the game? Figure 3 shows a fitted logistic regression curve for the probability of a win based on the number of hits using boxscores from each of the 2009 World. Logistic regression is a forecasting technique that provides a probability percentage for a given variable. Calculation of Logit or Log Odds. We evaluate GE-FL on six text classification data sets. Insurance Logistic Regression Project. Instead, it follows the Dickey-Fuller distribution, with critical values stricter than those of the normal. Today, we will perform regression analysis using SAS in a step-by-step manner with a practical use-case. I excluded catchers who made the Hall as managers but not as players. Regression goes beyond correlation by adding prediction capabilities. Using R, I was able to develop a multinomial logistic regression model that accurately predicted the cluster membership with up to 89% accuracy on validation data. That said, on. Simple regression is used to examine the relationship between one dependent and one independent variable. Lecture 5 (2/23): NFL. 
The simple linear regression model indicated a weak positive association between ball velocity and elbow-varus torque at the between-subjects level. cross validation. In this post, we've used a very simple model- linearly predicted by AB. The primary focus is on model development and applications. Welcome! This is one of over 2,200 courses on OCW. IN MATHEMATICS AND STATISTICS, MINNESOTA STATE UNIVERSITY, MANKATO, MINNESOTA, DECEMBER 2014 Abstract. The way it works is basically this: You take the team's observed record and add a certain number of games of league-average performance. No significant differences were found between groups with respect to teams that kept a pitch count, teams that had pitch count limits, the age a child began. world Feedback. for predicting the outcome of baseball games are ex-plored, using both classification and regression ap-proaches. Node 2 of 9. I consider linear regression one of the six fundamental ML techniques that all software developers should know: 1. Using R, I was able to develop a multinomial logistic regression model that accurately predicted the cluster membership with up to 89% accuracy on validation data. Recent research has shown Major League Baseball (MLB) players that bat left-handed and throw right-handed, otherwise known as sinister right-handers, are more likely to have a career batting average (BA) of. INTRODUCTION B ASEBALL has always been known as America’s favorite pastime. (brms provides three other options for ordinal regressions, too. 1 Regression Trees In order to motivate regression trees ,webeginwithasimpleexample. Building a Predictive Model for Baseball Games TAIT, JORDAN ROBERTSON M. A logical matrix indicating which elements are in each model. Check the box for Probabilities and Group membership hit continue. Life Expectancy and Comparative Mortality of Major League Baseball players, 1900-1999. 723; In the above bulleted list, you might have noticed the terms "tuned" and "un. Sabermetricians collect and summarize the relevant data from this in-game activity to answer specific questions. Binary logistic regression is used to predict and model 0,1 problems. 275 or lower, then. Recall the baseball data. and offers a solution that uses multilevel logistic regres-sion (MLR) with parametric curves. District Judge Christina Snyder ruling States. To perform simple linear regression, select Analyze, Regression, and then Linear… In the dialogue box that appears, move policeconf1 to the Dependent box and MIXED, ASIAN, BLACK, and OTHER to the Independent(s) box. Predicting Batted Ball Outcomes in Major League Baseball Abstract Statcast is a radar tracking technology implemented in 2015 in MLB Ballparks with the goal of measuring various metrics in baseball games. It provides a selection of efficient tools for machine learning and statistical modeling including classification, regression, clustering and dimensionality reduction via a. Create a new SAS command file in the enhanced editor to perform the following analyses. BITTNER FALL 2015 A thesis submitted in partial fulfillment of the requirements for a baccalaureate degree in Statistics with honors in Statistics Reviewed and approved* by the following: Andrew Wiesner Lecturer of Statistics Thesis Supervisor. Please note: The purpose of this page is to show how to use various data analysis commands. In a class of 100 students, a logistic regression is performed on the course outcome (pass or fail) with midterm exam score as a predictor. 
In a logistic regression outcome vers DP, DB was significant. Once we move on to more complex examples, we will see that linear regression and generalized linear regression are limited and not flexible enough to be useful for most machine learning challenges. , age, sex, smoking, lifestyle). R is a versatile platform for importing data from web, be it in the form a downloadable file from a webpage or a table in a HTML document. To illustrate the method, we re-analyze data from a recent Time. There are several thick books published annually, such as The Elias Baseball Analyst (Siwoff, Hirdt, and Hirdt. Posthoc interpretation of support-vector machine models in order to identify features used by the model to make predictions is a relatively new area of research with special significance in the biological sciences. Employed statistical techniques such as linear, logistic, and polynomial regression, k-means clustering, and factor analysis to model customer typologies and impacts on customer loyalty and wallet share Collected, cleaned, and analyzed datasets from customer panel surveys of up to 5,000 responses, client. Stacked classifier using un-tuned models and logistic regression as the meta-estimator: 0. Descriptive Statistics for Variables. In this post, I'm going to show you how you can scrape your own. Logistic Modelling with Baseball Data Abstract This paper investigates the application of statistical methods to analyze the performance of professional baseball teams. layer abdominal, back, shoulder, and elbow injury status for those seasons using multivariate binomial logistic regression analysis. 3 Poisson Regression 120 8. This article goes beyond its simple code to first understand the concepts behind the approach, and how it all emerges from the more basic technique of Linear Regression. Logistic regression is a method used to estimate the probability of a binary response. Players should avoid sudden spikes in acute workload to minimize the risk of injury to the throwing arm. edu Statistics II. A Logistic Regression/Markov Chain Model For NCAA Basketball Paul Kvam1 and Joel S. Additional resources on linear regression: Introduction to Statistical Learning. These include standard methods such as the runs test, as well as a more complex logistic regression model with several explanatory variables. These can be expressed in terms of the estimated effect of the factor of interest on the outcome, or more simply as the exponential of the factor's coefficient (for instance, OR = exp(β 1), where β 1 denotes this effect). (For the initiated, the exponentiated B on the final model for DL stint was 1. The primary focus is on model development and applications. regsubsets returns an object with elements. Since the split at. It can also be used for regression — output is the value for the object (predicts. A set of problems and solutions, in R, on various parts of applied statistics 13. 3 Binary logistic regression models. Are you sure that you are talking about estimated probabilities and not the regression coefficients? $\endgroup$ – V. Specifically, I ran a logistic regression between a player's JAWS components (his career and peak seven-year WAR) and his HOF status, with dummy variables for each position (some positions have. These include standard methods such as the runs test, as well as a more complex logistic regression model with several explanatory variables. The Chronic Apr 20, 2011 Dre's iconic rap album "The Chronic" digitally, a federal judge ruled Tuesday. 
Scikit-learn (Sklearn) is the most useful and robust library for machine learning in Python. It provides a selection of efficient tools for machine learning and statistical modeling, including classification, regression, clustering, and dimensionality reduction via a. Not surprisingly, after surviving multiple world wars, the Great Depression, and over 125 years, baseball is commonly referred to as "America's Pastime". Regression goes beyond correlation by adding prediction capabilities. Interpretation of coefficients in multiple regression: the interpretations are more complicated than in a simple regression. Thresholding output (e. By Andrew Rubino: Since I had the hall of fame statistics, I figured that I could use these as a baseline and try to fit a logistic regression model that would take data for more recent players and predict whether or not they would make the hall of fame. Let me explain both in more detail. Mallick, March 22, 2017. Abstract: Logistic regression is applied to both the Portuguese bank and Freddie Mac datasets, and an additional logistic hierarchical model is fit to the Fred-3. My question is: what is the best way to show my results? How can I plot my outputs for both the fit and the prediction curve? This is an example of what I am doing, using the baseball dataset from R. The crosstabs and logistic regression have the dependent variable set up categorically, while the ANOVA and regression results have the dependent variable set up continuously. If you want to learn more about categorical data analysis, there are several. I like baseball, so this example (again) from Shane Jensen appeals to me.
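Since scikit-learn comes up repeatedly here, a self-contained fit is worth showing. Everything below is synthetic: the two features and their effects are invented purely so the example runs.

```python
# Self-contained logistic regression fit with scikit-learn on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
z = 1.5 * X[:, 0] - 0.5 * X[:, 1]                    # true log-odds
y = (rng.random(200) < 1 / (1 + np.exp(-z))).astype(int)

model = LogisticRegression().fit(X, y)
print(model.coef_, model.intercept_)  # estimates on the log-odds scale
print(np.exp(model.coef_))            # exponentiated: odds ratios
```

The coefficients live on the log-odds scale; exponentiating them gives odds ratios, matching the OR = exp(β1) remark above.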
Keeping an Eye on Healthcare Costs: The D2Hawkeye Story. The MLB even goes as far as to make low-level details on every pitch publicly available. So logistic regression does a good job of predicting output if the relationship between the input and output variables is linear. Regression analysis is a statistical technique that predicts the level of one variable (the "dependent" variable) based on the level of another variable (the "independent" variable). In this post, I'm going to show you how you can scrape your own. Logistic Modelling with Baseball Data. Abstract: This paper investigates the application of statistical methods to analyze the performance of professional baseball teams. Player abdominal, back, shoulder, and elbow injury status for those seasons was modelled using multivariate binomial logistic regression analysis. Players should avoid sudden spikes in acute workload to minimize the risk of injury to the throwing arm. I'm using a logistic regression and have never created categories before. The main idea is to map the data to a feature space based on kernel density estimation. The mean Z statistic for the logistic regression slope and the probability of a significantly positive result (at the nominal. I have saved the data frame containing all of this work for the 2014 season on my website. Interpretation in Logistic Regression. There, nonetheless, still are ample data generated from those games, and the data are readily available from various sources. This lesson also introduces a lot of new dplyr verbs for data cleaning and summarizing that we haven't used before. Unzip sasdata2. To make this conclusion statistically explicit, and correct for the count, I ran a binomial logistic regression. Once again, I'll be taking for granted some of the set-up steps from Lesson 1, so if you haven't done that yet be sure to go back and do it. Linear to Logistic Regression, Explained Step by Step (Mar 3, 2020). 2020-01-12 Data Science Across the World. Logistic regression is used to model smooth curves for the probability $p_{ij}$ of player $i$ successfully fielding ball-in-play (BIP) $j$; for fly-balls/liners: $$\log\!\left(\frac{p_{ij}}{1-p_{ij}}\right) = \beta_{i0} + \beta_{i1}D_{ij} + \beta_{i2}D_{ij}F_{ij} + \beta_{i3}D_{ij}V_{ij},$$ where $D_{ij}$ is the distance to the BIP, $V_{ij}$ is the velocity, and $F_{ij} = 1$ if moving forward (vs. back). Output cannot be directly interpreted as a class prediction. If what I want is a generative model of the data, I get the case for logistic regression. Multivariable logistic regression models were developed to identify the risk factors. But honestly, the beauty of regression is that it can be used for quite a bit. With injured pitchers only accounting for about 28% of my dataset, I first had to deal with my class imbalance.
In this paper, we present a combined logistic regression/Markov chain model for predicting the outcome of NCAA tournament games given only basic input data. A Logistic Regression/Markov Chain Model for NCAA Basketball, Paul Kvam and Joel S. Sokol. Abstract: Each year, more than $3 billion is wagered on the NCAA Division I men's basketball tournament. Most of that money is wagered in pools where the object is to correctly predict winners of each game, with emphasis on the last four teams remaining (the Final Four). Multivariate regression, model checks, prediction methods. Missing data takes many forms and can be attributed to many causes. This is an overarching problem that affects predictive analytics. Dayaratna, K. and Miller, S., "First Order Approximations of the Pythagorean Won-Loss Formula." We present a logistic regression model that exploits the basketball schedule's structure to answer this question, and offer a solution that uses multilevel logistic regression (MLR) with parametric curves. He received his Ph.D. in Operations Management from the University of Calgary in 1999. Lab 3 (2/11): Lahman package, multiple regression, pitching statistics. Lab 4 (2/18): Multiple logistic regression, NFL field goals. Lecture 5 (2/23): NFL. Stochastic scheduling using epsilon-greedy. To perform simple linear regression, select Analyze, Regression, and then Linear… In the dialogue box that appears, move policeconf1 to the Dependent box and MIXED, ASIAN, BLACK, and OTHER to the Independent(s) box. Predicting Batted Ball Outcomes in Major League Baseball. Abstract: Statcast is a radar tracking technology implemented in 2015 in MLB ballparks with the goal of measuring various metrics in baseball games. Create a new SAS command file in the enhanced editor to perform the following analyses. Please note: the purpose of this page is to show how to use various data analysis commands; it does not cover all aspects of the research. BITTNER, FALL 2015. A thesis submitted in partial fulfillment of the requirements for a baccalaureate degree in Statistics with honors in Statistics. Reviewed and approved by: Andrew Wiesner, Lecturer of Statistics, Thesis Supervisor. An Example: Predicting Purchase Behavior.
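A toy version of the logistic half of such a model can be sketched from "basic input data" alone. Here the single input is the difference in tournament seeds, and the games are simulated, since no real data accompanies this text.

```python
# Toy logistic win model: P(win) from the seed difference, on simulated games.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
seed_diff = rng.integers(-15, 16, size=(500, 1))     # own seed minus opponent's
p_true = 1 / (1 + np.exp(0.18 * seed_diff.ravel()))  # stronger (lower) seed wins more
won = (rng.random(500) < p_true).astype(int)

model = LogisticRegression().fit(seed_diff, won)
print(model.predict_proba([[-10]])[0, 1])  # win probability when seeded 10 better
```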
Can be through a previous job role, schoolwork, or independent research, and does not need to be baseball-related. Robust t-distribution priors for logistic regression coefficients. This dataset comprises sales transactions captured at a retail store. Logistic Regression Prof. Simulation of empirical Bayesian methods (using baseball statistics). Previously in this series: the beta distribution. Last month I was working on a problem of logistic regression that I suspected had mislabeled outcomes. "Simulation of empirical Bayesian methods (using baseball statistics)" was published on January 11, 2017. Fisher, one of the most renowned statisticians of the 20th century. Fisher combined the. My next step was to throw all of my features into a few classification models. A few hours of work got me a tool that evaluates: page TrustFlow and CitationFlow (from MajesticSEO; I'm adding SEOmoz metrics now), and links per word. • Please use the "Assignment_Template.Rmd" to work on the assignment and submit your results. • Everything you need to know to do this assignment is contained in the Logistic lecture notes.
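The empirical-Bayes idea mentioned above can be sketched with a Beta-Binomial model for batting averages. The prior parameters below are illustrative, roughly the magnitude often fit to MLB batting data, not values estimated from this text.

```python
# Sketch of empirical Bayes shrinkage for batting averages (Beta-Binomial).
ALPHA0, BETA0 = 79.0, 230.0  # assumed Beta prior on the true average

def eb_estimate(hits, at_bats):
    """Posterior mean batting average under the assumed prior."""
    return (hits + ALPHA0) / (at_bats + ALPHA0 + BETA0)

print(round(eb_estimate(4, 10), 3))      # tiny sample: pulled hard toward the prior
print(round(eb_estimate(300, 1000), 3))  # large sample: stays near .300
```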
(1996) describe a study of 54 patients undergoing a certain kind of liver operation in a surgical unit. Baseball is considered to be the national sport of the USA, but its popularity has declined in the last few years, mostly due to people's interest in other sports. It is also a favorite speculation for the ambitious gamblers of Las Vegas. It's used for many purposes like forecasting, predicting, and finding the causal effect of one variable on another. Articles deal with subjects such as measurements of player performance, tournament structure, and the frequency and occurrence of records. Additionally, the journal serves as an outlet for professionals in the sports world to raise issues and ask. Use the logistic regression model to calculate the predicted log-odds that an observation has a "yes" response. Then, use the log-odds to calculate the predicted probability of a "yes" response. Then, use the predicted probabilities to classify the observation as having a "yes" or "no" response. Yesterday: 0-1, -109. Year: 157-127, +559. I think this is the most games in one day where the model saw so many games eligible. A researcher would like to test the significance of differences in the means of three age groups of baseball pitchers for the outcome variables of speed and accuracy. We'll define an independent variable as significant if there is at.
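The three-step recipe just described can be written out directly. The intercept and slope below are made up for illustration.

```python
# Log-odds -> probability -> class, following the three steps above.
import math

def predict_yes(x, b0=-2.0, b1=0.8):
    log_odds = b0 + b1 * x                    # step 1: predicted log-odds
    prob = 1 / (1 + math.exp(-log_odds))      # step 2: predicted probability
    label = "yes" if prob >= 0.5 else "no"    # step 3: classification
    return log_odds, prob, label

print(predict_yes(4.0))  # (1.2, 0.768..., 'yes')
```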
If the 2016 title-winning Cubs taught us anything, it's that untangling pitching skill from defensive talent is more difficult than ever. All in all, there are 163 batters in the Baseball Hall of Fame, which translates to a file of roughly 3,500 rows (it includes all their seasons played). (William Shakespeare, Hamlet.) Binary Logistic Regression. Regression analysis investigates the relationship between variables; typically, the relationship between a dependent variable and one or more independent variables. The usual suspects are moneyball types: SABRmetrics enthusiasts with a love of baseball and a penchant for R. In a regression setting, one predicts a continuous variable as opposed to a binary or multi-level outcome. Logistic regression in MS Azure Machine Learning is used as our predictive model. A baseball pitcher needs special skill in strength, speed, and endurance. It would be not only fascinating, but also useful, to apply a similar model to these settings. We use the participation of pairs of nodes in 16 different types of triads as features for the learning algorithm. Last week, we published "Perfect way to build a Predictive Model in less than 10 minutes using R". Here are your #gorymath footnotes. 1: Logistic regression tries to predict the probability of an event occurring, in this case an injury, by putting every independent variable into the equation Pr(injury) = 1 / (1 + e^(-z)).
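The footnote's equation leaves $z$ undefined; in the standard formulation (an assumption here, since the original does not spell it out), $z$ is the linear predictor built from the independent variables:

$$\Pr(\text{injury}) = \frac{1}{1 + e^{-z}}, \qquad z = \beta_0 + \beta_1 x_1 + \cdots + \beta_k x_k,$$

so the log-odds $\log\bigl(\Pr/(1-\Pr)\bigr)$ equal $z$ exactly.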
It's a little different from the corresponding thing in regression, because the right way to do the test is not an F test, but now a chi-squared test (this is true for all generalized linear models, of which logistic regression is one). In a class of 100 students, a logistic regression is performed on the course outcome (pass or fail) with midterm exam score as a predictor. The midterm is a continuous variable with a long-run normal distribution with a mean of 60 and a standard deviation of 15. Your instructors found that a learning rate around 0.0001 and a regularization parameter around 1 work well for this dataset. Lab Problem. , Anáhuac University, 2001. Project submitted in partial fulfillment of the requirements for the degree of Master of Science in the Department of Statistics and Actuarial Science, Faculty of Science, Fabián Enrique Moya, 2012, SIMON FRASER UNIVERSITY. Logistic Regression, Decision Trees, k-Nearest-Neighbors, Gaussian Process Regression: baseball data analytics fun. Data Science Dojo Discussions is a Q&A platform for asking and answering posts related to data science. I examined the records of many "regular" Major League players through four seasons, 1987-1990, and used several statistical methods to check for streakiness. These include standard methods such as the runs test, as well as a more complex logistic regression model with several explanatory variables. We analyzed 57,293 MLB games played between 1952 and 2009 (which had 4,566,468 plate appearances). Lab 7: Introduction to linear regression. Batter up: the movie Moneyball focuses on the "quest for the secret of success in baseball". This lesson also introduces a lot of new dplyr verbs for data cleaning and summarizing that we haven't used before; the other lessons can be found there: Lesson 2; Lesson 4. X_train, y_train are training data, and X_test, y_test belong to the test dataset. The above snippet will split data into a training and a test set.
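The "snippet" the passage refers to is not shown; below is a minimal reconstruction of such a scikit-learn train/test split, on made-up data.

```python
# Minimal train/test split sketch (the referenced snippet is missing).
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(20).reshape(10, 2)
y = np.array([0, 1] * 5)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)  # random_state fixes the shuffle
print(X_train.shape, X_test.shape)  # (7, 2) (3, 2)
```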
Logistic Regression: Unstandardized Coefficient. These are rather high percentages, when there were several factors which limited the value of the calculations. For instance, you can express the nonlinear function $Y = e^{B_0} X_1^{B_1} X_2^{B_2}$. We split our data up into training and testing data. We estimate these two coefficients using maximum likelihood estimation. In this way, R is similar to PROC IML. The output from the logit command will be in units of log odds. Logistic Regression Analysis in R programming: this time last year, I had a very vague knowledge of logistic analysis, an analysis that is now a cornerstone of my profession. Coding whether or not the home team won leads to a logistic regression model to estimate the probability of winning from the Vegas spread and/or partial-game information. Logistic regression is a forecasting technique that provides a probability percentage for a given variable. The SAT variable used in the model presented below is the average of four SAT scores: verbal 25th percentile and 75th percentile, and math 25th and 75th percentile. 7, 45304; doi: 10.1038/srep45304 (2017). A model for predicting the probability of a win in basketball, Kathleen Jean Shanahan, University of Iowa; posted with permission of the author. Owing to the nature of non-linearity, the sample size calculation for logistic regression is complicated.
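The "output in units of log odds" remark can be illustrated with statsmodels; the data below are synthetic, and exponentiating the fitted parameters moves them to the odds-ratio scale.

```python
# Logit coefficients are reported on the log-odds scale.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
x = rng.normal(size=300)
y = (rng.random(300) < 1 / (1 + np.exp(-(0.5 + 1.0 * x)))).astype(int)

X = sm.add_constant(x)            # intercept column plus the predictor
fit = sm.Logit(y, X).fit(disp=0)
print(fit.params)                 # log-odds scale
print(np.exp(fit.params))         # odds-ratio scale
```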
2. Predicting the Baseball World Series Champion. Which of the following variables is a significant predictor of the WorldSeries variable in a bivariate logistic regression model? To determine significance, remember to look at the stars in the summary output of the model. We will focus our efforts on two catchers, Yadier Molina and Matt Wieters, over the course of the 2013 season. The goal of regression is to estimate (predict) these quantities when they are unknown, through the use of additional, possibly correlated quantities: for example, the offensive and defensive statistics of a baseball player, or the location and attributes of a home. Regular linear regression can't handle binary dependent variables, but there is a special type of regression, logistic regression, that is designed for just this type of analysis. Also known as logistic or sometimes logit regression; a foundation from which more complex models are derived; e. Lecture 3 (2/9): Baseball stats, pitchers. The questions on this quiz use the BANK and BASEBALL data sets in this archive. In other words, the logistic regression model predicts P(Y=1) as a […]. The Analytics Edge, Unit 2: Moneyball. Reproducible notes following lecture slides and videos; we use linear regression to verify which baseball stats are more important to predict runs. For example, logistic regression would be used to examine whether a family uses child care subsidies. The first part of the book introduces the scientific method, then covers research design, measurement, and descriptive statistics. Shoulder Stretching Intervention Reduces the Incidence of Shoulder and Elbow Injuries in High School Baseball Players: a Time-to-Event Analysis. RegressIt also now includes a two-way interface with R that allows you to run linear and logistic regression models in R without writing any code whatsoever. Problem 4E. I will train a logistic regression model on the years 2004-2010. She realizes that she will need to control for the potential confounding variable of base skill of the individual subjects in all three groups. Bart Baesens: Logistic regression has been the most valuable method traditionally, and social network analysis could be the most valuable technique in the future. Chapter 7, Modeling Relationships of Multiple Variables with Linear Regression: all the variables are considered together in one model. In cases where there are more than two discrete classes, multinomial logistic regression can be used. data(Hitters, package = "ISLR"); Hitters <- na.omit(Hitters); tibble::as_tibble(Hitters). The random_state variable is a pseudo-random number generator state used for random sampling. Willoughby ([email protected]).
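A minimal multinomial fit looks like the sketch below; the features and labels are synthetic stand-ins for the pitch-type or cluster labels discussed in the text. (On recent scikit-learn versions the `multi_class` argument is unnecessary, since multinomial is the default, but it is spelled out here because the text mentions the 'ovr'/'multinomial' option.)

```python
# Minimal multinomial logistic regression for a three-class outcome.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 2))
y = rng.integers(0, 3, size=300)  # three arbitrary classes

clf = LogisticRegression(multi_class="multinomial", max_iter=500).fit(X, y)
print(clf.predict_proba(X[:2]))   # one probability per class; rows sum to 1
```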
1. Introduction. Baseball is America's national pastime and has become increasingly popular around the world in recent decades. He has authored or coauthored several books, including Ordinal Data Modeling, Bayesian Computation with R, and Workshop Statistics: Discovery with Data, A Bayesian Approach. In logistic regression, the dependent variable is a binary variable that contains data coded as 1 (yes, success, etc. M2 takes a little more effort. Empirical Bayes is useful here because when we don. If you know the slope and the y-intercept of that regression line, then you can plug in a value for X and predict the average value for Y. "Explaining Success in Baseball: The Local Correlation Approach." pp. 788-803 (with P. The logistic function, also called the sigmoid function, was developed by statisticians to describe properties of population growth in ecology, rising quickly and maxing out at the carrying capacity of the environment. Section on Statistics in Sports, American Statistical Association, Joint Statistical Meeting, Anaheim, CA. A multilevel logistic regression model with varying slopes and coefficients will be used to estimate the ball possession rates at a team level after accounting for in-game situations. Classically, this algorithm is referred to as "decision trees", but on some platforms like R they are referred to by the more modern. A decision tool was created including 7 variables, formed through logistic regression followed by recursive partitioning. Primary outcome: high-risk criteria, 100% sensitivity and 68. Hip ROM did not correlate with shoulder or elbow injuries. Recovery rates are an estimated 80-90%, and most pitchers go on to lead lengthy careers. This PPT is basically for students who want to study stats, especially linear regression. Example data sets: Thunder Basin Antelope Study, Systolic Blood Pressure Data, Test Scores for General Psychology, Hollywood Movies, All Greens Franchise, Crime, Health, Baseball. Caribbean Data. General Social Survey. Women's labor participation.
Jonathan Bartlett: I am a Reader in Statistics in the Department of Mathematical Sciences at the University of Bath. Economy Case Study. Insurance Logistic Regression Project. Introduction… The goal of this analysis is to create a two-part model that predicts: (1)… We show how to estimate the parameters of the Weibull distribution using the maximum likelihood approach. multi_class: the multiclass option can be either 'ovr' or 'multinomial'. In 2012, the total payroll for the New York Yankees was almost $200 million, while the total payroll for the Oakland Athletics (a team known for using baseball analytics, or sabermetrics) was about $55 million, less than one-third of the Yankees' payroll. Healthcare TM: linear logistic regression. Forecasting Baseball, Clint Riley.
|
2020-07-05 17:39:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.416205495595932, "perplexity": 1651.924450462875}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655887377.70/warc/CC-MAIN-20200705152852-20200705182852-00348.warc.gz"}
|
https://www.vedantu.com/question-answer/give-reason-for-the-following-a-rectangle-can-be-class-10-maths-cbse-5ee34d8e63ff30423e343585
|
# Give reason for the following: A rectangle can be thought of as a special parallelogram.
Hint: First recall and write down the properties of a rectangle and of a parallelogram. Then compare them one by one, to see which figure's properties are the more specific; this gives the desired result.
We are asked to prove that a rectangle can be thought of as a special parallelogram.
To prove this, we first write down the properties of the parallelogram and the rectangle separately.
So, the properties of a parallelogram are:
i) The opposite sides of a parallelogram are equal and parallel to each other.
ii) The sum of two adjacent angles is $180^\circ$.
iii) The diagonals of a parallelogram bisect each other.
iv) The two pairs of opposite angles are equal to each other.
Now let's write down the properties of a rectangle:
i) The opposite sides of the rectangle are equal and parallel to each other.
ii) Each interior angle of the rectangle is $90^\circ$.
iii) The diagonals of the rectangle are equal in length and bisect each other.
iv) The sum of adjacent angles is $180^\circ$, and the opposite pairs of angles are equal.
By comparing the two lists, we can see that every property of a parallelogram also holds for a rectangle, while the rectangle satisfies the extra condition that each interior angle is $90^\circ$ (which in turn forces adjacent angles to sum to $180^\circ$ and opposite angles to be equal). In other words, the rectangle's properties are a more specific version of the parallelogram's.
Hence we can say that a rectangle can be thought of as a special parallelogram.
Note: Students must have an idea about the types of special quadrilaterals. They should know the properties of rectangles and parallelograms by heart and be well versed in them so that they can compare.
Students are often confused about whether all rectangles are parallelograms or vice versa: every rectangle is a parallelogram, but not every parallelogram is a rectangle. Comparing their properties, as above, shows that rectangles are special versions of parallelograms.
|
2023-03-29 10:30:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6612161993980408, "perplexity": 468.75799692055733}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948965.80/warc/CC-MAIN-20230329085436-20230329115436-00598.warc.gz"}
|
https://mediaonemarketing.stricltyebusinessexpo.com/hbp1k/77acdc-circumference-of-ellipse
|
## circumference of ellipse
This is a direct result of the fact that elliptic integrals cannot be expressed by elementary functions: the circumference of an ellipse itself is given by an elliptic integral. Ellipse. The circumference is in whatever designation of units you have used for the entries. For a circle, the mathematical relation between radius (or diameter) and circumference is given by the formula $C = 2\pi r = \pi D$, where $C$ is the circumference and $\pi \approx 3.14$; a circumference-to-diameter calculator uses this to find the diameter of a circle given its circumference. For a circle it is easy to find the circumference, since the distance from the center to any point of the locus of the circle is the same; this distance is called the radius. Enter the width of the longest long axis, AB, and the length of the longest short axis, CD; then click on Calculate. The area of the ellipse is $a \times b \times \pi$. An ellipse circumference formula (Ramanujan's approximation) can be stated as $C \approx \pi\bigl(3(a+b) - \sqrt{(a+3b)(3a+b)}\bigr)$. Another stated approximation, within about 0.63% of the true value, is $C \approx \dfrac{\pi A B + (A-B)^2}{A/2 + B/2}$, where $A$ and $B$ are the major and minor axes.
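The two ideas above, Ramanujan's closed-form approximation and the exact elliptic-integral value, can be put side by side. The sketch below uses SciPy's complete elliptic integral of the second kind, with the identity $C = 4aE(m)$, $m = 1 - (b/a)^2$, assuming $a \ge b > 0$; the sample axes are made up.

```python
# Ramanujan's approximation vs. the elliptic-integral circumference.
from math import pi, sqrt
from scipy.special import ellipe

def ramanujan(a, b):
    return pi * (3 * (a + b) - sqrt((3 * a + b) * (a + 3 * b)))

def exact(a, b):
    return 4 * a * ellipe(1 - (b / a) ** 2)  # C = 4 a E(m), m = 1 - (b/a)^2

a, b = 5.0, 3.0
print(ramanujan(a, b), exact(a, b))  # agree to roughly six significant figures
```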
Ramanujan's second formula first calculates an auxiliary quantity $h$: this is an exact formula, but it needs an infinite series of calculations to be exact, so in practice we still only get an approximation. The approximate value of the circumference of an ellipse can also be calculated as $L = \pi\sqrt{2(a^2+b^2)}$, equivalently $C \approx 2\pi\sqrt{(a^2+b^2)/2}$, the root-mean-square form. Below is an approximation that is within about 0.63% of the true value: $C \approx \dfrac{4\pi ab + (a-b)^2}{a+b}$. Arc of ellipse: formulas and a definition for the length of an arc of an ellipse. Position of a point relative to an ellipse.
To solve this problem, you must remember how to find the measure of the interior angles of a regular polygon. In the case of a pentagon, the interior angles have a measure of $(5-2)\cdot 180^\circ/5 = 108^\circ$.
In geometry, the circumference (from Latin circumferens, meaning "carrying around") is the perimeter of a circle or ellipse; more generally, the perimeter is the curve length around any closed figure. The circumference of a circle is the distance around the circle. Ellipses are examples of conic sections, which are curves formed by the intersection of a plane with a cone; an ellipse arises when the plane is not parallel to the axis, base, or generatrix of the intersected cone. Ellipses, despite their similarity to circles, are quite different; an oval, on the other hand, is simple. The Earth's orbit is an ellipse. The standard equation is $x^2/a^2 + y^2/b^2 = 1$ with $a > b$: here $a$ is called the major radius or semi-major axis (the longest distance from the center outwards, not all the way across), and $b$ is the minor radius or semi-minor axis. The major axis is the longest diameter and the minor axis the shortest; since they are measured from the center, they are like "radius" measures, and if they are equal in length then the ellipse is a circle. The quantity $e = \sqrt{1 - b^2/a^2}$ is the eccentricity of the ellipse, and the shape of an ellipse is represented by its eccentricity; the circumference alone is not such a good indicator of its shape. When drawing an ellipse with a piece of cardboard, a pencil, and string, each of the two pins pushed into the cardboard marks a focus (plural: foci) of the ellipse. In sewing, finding the vertices of the ellipse can be helpful for designing elliptic cutouts, and computer programmers also must know how to find the vertices to program graphic shapes.
Although there is no single, simple formula for calculating the circumference of an ellipse, some formulas are more accurate than others, and the exact circumference is given by elliptic integrals of the second kind, which are not expressible by elementary functions. Most engineering references contain a formula for calculating the circumference of an ellipse. For an oval running track, one can calculate the desired perimeter and then determine the length of the circular arc sections plus the length of the two tangent straight sections, given the lengths of the minor and major axes. A collection of such formulas was assembled by Paul Bourke, with corrections and contributions by David Cantrell and Charles Karney (2013); named approximations include Cantrell's formula, the Ramanujan II formula, the YNOT II formula, and the Euler and naive formulas.
|
2021-10-20 19:20:42
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9132274389266968, "perplexity": 849.078320524613}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585348.66/warc/CC-MAIN-20211020183354-20211020213354-00638.warc.gz"}
|
http://math.stackexchange.com/questions/6268/does-a-continuous-and-1-1-function-map-borel-sets-to-borel-sets/6269
|
# Does a continuous and 1-1 function map Borel sets to Borel sets?
Suppose $f: \mathbb{R} \to \mathbb{R}$ is a continuous function which is 1-1, then does $f$ map Borel sets onto Borel sets?
-
The continuous image of a Borel set in a Polish space is analytic. So it is Lebesgue measurable. If the complement of that analytic set is analytic then it is Borel (a theorem by Suslin). Now we know that for injective functions we have $f(A \setminus B) = f(A) \setminus f(B)$ so the complement is analytic too, hence the continuous one-to-one image of a Borel set is Borel in a Polish space.
Maybe this is unnecessarily cluttered, but it is the thing that pops up in my mind.
Edit:
Note that $f$ maps open sets to open sets. This is because continuous injective functions $f:\mathbb{R} \to \mathbb{R}$ are strictly monotonic, so $f$ maps open intervals to open intervals.
Define $D := \{Q \in P(\mathbb{R}) : f(Q) \textrm{ is Borel}\}$.
Now $\mathbb{R}$ is in $D$, and $D$ is closed under complements and countable unions (because $f$ is injective), thus $D$ is a $\sigma$-algebra. So let $B$ be the Borel $\sigma$-algebra and $O$ the open sets of $\mathbb{R}$. Then $B \subset D$ if $O \subset D$, and indeed $O \subset D$ since $f$ maps open sets to open sets. So we get that for all Borel sets the image is Borel.
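Spelled out, the closure properties rest on two identities valid for injective $f$:
$$f\Big(\bigcup_{n} Q_n\Big) = \bigcup_{n} f(Q_n), \qquad f(Q^{c}) = f(\mathbb{R}) \setminus f(Q),$$
and $f(\mathbb{R})$ is an open interval, hence Borel, so the right-hand sides are Borel whenever each $f(Q_n)$ and $f(Q)$ is.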
-
Good answer. Another way to put things (but with less details) is that $f$ is a homeomorphism (because as you say it is an open map), i.e., its inverse is continuous, hence its inverse is Borel measurable, hence $f$ maps Borels to Borels. – Jonas Meyer Oct 7 '10 at 15:40
But $f$ is not surjective? So how can we talk about an "inverse"? – Jonas Teuwen Oct 7 '10 at 15:42
The inverse is defined on its image, which is an open interval. – Jonas Meyer Oct 7 '10 at 15:43
Ah, right, I didn't think about that long enough, I thought you would get problems with the domain... Nice. – Jonas Teuwen Oct 7 '10 at 15:49
You mean continuous and injective. Otherwise you can cook up some example by "switching" two intervals off the identity map. – Asaf Karagila Oct 7 '10 at 17:03
Note, you also need the fact that $f(\mathbb{R})$ is Borel. This fortunately is so, as $\mathbb{R}$ is $\sigma$-compact, and the continuous image of a compact set is compact, thus closed (since we are working in Hausdorff spaces), thus Borel.
-
|
2014-07-28 04:31:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.987017035484314, "perplexity": 298.63444502057985}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510256737.1/warc/CC-MAIN-20140728011736-00398-ip-10-146-231-18.ec2.internal.warc.gz"}
|
https://dsp.stackexchange.com/questions/63047/what-did-i-do-wrong-with-this-simple-filter-build
|
# What did I do wrong with this simple filter build?
I tried to put everything I have learned from people here together to code my first filter from scratch. Unfortunately, it didn't go well and I'm not getting the expected output. The math/code became quite big and messy, which makes it hard to read. But I went through it a few times and couldn't find an error. I'm not sure what I've done wrong. I feel like perhaps I have a big idea wrong about how to do this. Maybe I took a wrong turn somewhere.
Here is the circuit:
This is for a physical modeling purpose, so it is necessary to have everything in terms of the component resistor, capacitor, and inductor values as drawn.
I attempted to work through this by first solving the impedance of the parallel and series components, working out a Laplace transfer function, then substituting $$s=\frac{1-z^{-1}}{T}$$, expressing it in terms of per sample input/output delay, and then writing some simple C++ for it.
But the output is just an impulse of sound then it overloads. It's not filtering as expected. Any ideas what I've done wrong?
I've tried to write this out as clearly as I can in terms of the steps I've taken in the hopes it might make sense.
Parallel Component:
$$\frac{1}{R_1} = \frac{1}{sL} + \frac{1}{R_d+\frac{1}{sC}}$$
$$R_1 = \frac{1}{\frac{1}{sL} + \frac{1}{R_d+\frac{1}{sC}}}$$
$$R_1 = \frac{Ls(CR_ds + 1)}{CLs^{2} + CR_ds+1}$$
Series Component:
$$R_2 = 2R_g$$
Transfer Function:
$$V_{out} = \frac{R_2}{R_2+R_1} V_{in}$$
$$V_{out} = \frac{2R_g}{2R_g + \frac{Ls(CR_ds + 1)}{CLs^{2} + CR_ds + 1}} V_{in}$$
$$V_{out}(s) = \frac{2R_g(CLs^{2} + CR_ds + 1)}{2CR_gLs^{2} + 2CR_gR_ds+CLR_ds^{2} + 2R_g + Ls} V_{in}(s)$$
Substituting $$s=\frac{1-z^{-1}}{T}$$:
$$_{Numerator} = \frac{-2 R_g (-C L z^{-2} + 2 C L z^{-1} - C L + C R_d T z^{-1} - C R_d T - T^{2})}{T^{2}}$$
$$_{Denominator} = \frac{2 C R_g L z^{-2} - 4 C R_g L z^{-1} + 2 C R_g L + 2 C R_g T R_d - 2 C R_g T R_dz^{-1} + C L R_d- 2C L R_dz^{-1}+C L R_dz^{-2} + 2 R_g T^2 - L T z^{-1} + L T}{T^2}$$
Canceling the $$1/T^{2}$$:
$$_{Numerator} = -2 R_g (-C L z^{-2} + 2 C L z^{-1} - C L + C R_d T z^{-1} - C R_d T - T^{2})$$
$$_{Denominator} = 2 C R_g L z^{-2} - 4 C R_g L z^{-1} + 2 C R_g L + 2 C R_g T R_d - 2 C R_g T R_dz^{-1} + C L R_d-2C L R_dz^{-1}+C L R_dz^{-2} + 2 R_g T^2 - L T z^{-1} + L T$$
Cross Multiplying:
$$_{Leftside} = 2 C R_g L V_{out}[n-2] - 4 C R_g L V_{out}[n-1] + 2 C R_g LV_{out}[n] + 2 C R_g T R_dV_{out}[n] - 2 C R_g T R_dV_{out}[n-1] + C L R_dV_{out}[n]-2C L R_dV_{out}[n-1]+C L R_dV_{out}[n-2] + 2 R_g T^2V_{out}[n] - L T V_{out}[n-1] + L TV_{out}[n]$$
$$_{Leftside} = V_{out}[n] (2 C R_g L + 2 C R_g T R_d +C L R_d + 2 R_g T^2 + L T) + 2 C R_g L V_{out}[n-2] - 4 C R_g L V_{out}[n-1] - 2 C R_g T R_dV_{out}[n-1] -2C L R_dV_{out}[n-1]+C L R_dV_{out}[n-2] - L T V_{out}[n-1]$$
$$_{Rightside} = -2 R_g (-C L V_{in}[n-2] + 2 C L V_{in}[n-1] - C LV_{in}[n] + C R_d T V_{in}[n-1] - C R_d TV_{in}[n] - T^{2}V_{in}[n])$$
Final Equation:
$$V_{out}[n] (2 C R_g L + 2 C R_g T R_d +C L R_d + 2 R_g T^2 + L T) = -2 R_g (-C L V_{in}[n-2] + 2 C L V_{in}[n-1] - C LV_{in}[n] + C R_d T V_{in}[n-1] - C R_d TV_{in}[n] - T^{2}V_{in}[n]) - (2 C R_g L V_{out}[n-2] - 4 C R_g L V_{out}[n-1] - 2 C R_g T R_dV_{out}[n-1] -2C L R_dV_{out}[n-1]+C L R_dV_{out}[n-2] - L T V_{out}[n-1])$$
$$V_{out}[n] = \frac{-2 R_g (-C L V_{in}[n-2] + 2 C L V_{in}[n-1] - C LV_{in}[n] + C R_d T V_{in}[n-1] - C R_d TV_{in}[n] - T^{2}V_{in}[n]) - (2 C R_g L V_{out}[n-2] - 4 C R_g L V_{out}[n-1] - 2 C R_g T R_dV_{out}[n-1] -2C L R_dV_{out}[n-1]+C L R_dV_{out}[n-2] - L T V_{out}[n-1])}{2 C R_g L + 2 C R_g T R_d +C L R_d + 2 R_g T^2 + L T }$$
Code:
class PhysicalFilter{
public:
void setSampleRate(double sampleRateIn){
T = 1/sampleRateIn;
}
float filterSample(float inputSample, float C, float L, float R_d, float R_g){
input_2 = input_1;
input_1 = input;
input = inputSample;
output_2 = output_1;
output_1 = output;
float numerator = -2 * R_g * ((-C * L * input_2) + (2 * C * L * input_1) - (C * L * input) + (C * R_d * T * input_1) - (C * R_d * T * input) - (T * T * input)) - ((2 * C * R_g * L * output_2) - (4 * C * R_g * L * output_1) - (2 * C * R_g * T * R_d * output_1) - (2 * C * L * R_d * output_1) + (C * L * R_d * output_2) - (L * T * output_1));
float denominator = (2 * C * R_g * L) + (2 * C * R_g * T * R_d) + (C * L * R_d) + (2 * R_g * T * T) + (L * T);
output = numerator/denominator;
return output;
}
private:
float input = 0.f;
float input_1 = 0.f;
float input_2 = 0.f;
float output = 0.f;
float output_1 = 0.f;
float output_2 = 0.f;
float C = 1.f;
float L = 1.f;
float R_g = 1.f;
float R_d = 1.f;
float T = 1/44100.f;
};
• You're confusing the time and frequency domains. You should NEVER have to divide. This is a simple difference equation, the "denominator" should be subtracted, not divided. Try your code first with a very simple first-order filter – Hilmar Jan 5 at 22:45
• @Hilmar: At a first glance I thought the same, but the denominator is just a constant, so it's correct. – Matt L. Jan 6 at 6:30
Your analog transfer function looks OK. For the sake of clarity - and to reduce the chance of making errors - I'd just rewrite it as
$$H_a(s)=G\cdot\frac{s^2+as + b}{s^2+cs + d}\tag{1}$$
with
\begin{align}G&=\frac{2R_g}{R_d+2R_g}\\a&=\frac{R_d}{L}\\b&=\frac{1}{LC}\\c&=G\left(a+\frac{1}{2R_gC}\right)\\d&=G\cdot b\end{align}
Then you can use the backward Euler transformation on the general second-order transfer function given by $$(1)$$, resulting in
$$H_d(z)=\frac{G}{1+cT+dT^2}\cdot\frac{1+aT+bT^2-(2+aT)z^{-1}+z^{-2}}{1-\frac{2+cT}{1+cT+dT^2}z^{-1}+\frac{1}{1+cT+dT^2}z^{-2}}\tag{2}$$
Now you can check if your discrete-time transfer function is correct.
The next step is to check your code. I would suggest to rewrite it to implement a general biquad transfer function:
$$H(z)=\frac{b_0+b_1z^{-1}+b_2z^{-2}}{1+a_1z^{-1}+a_2z^{-2}}\tag{3}$$
You can test your biquad routine by supplying some known coefficients $$b_i$$ and $$a_i$$ (e.g., for a standard low pass filter, etc.). As soon as you're convinced that the routine works properly, you can test it with the coefficients of your design. With what you have now you can't really separate the design and the implementation, so testing and debugging becomes problematic.
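For concreteness, here is one way such a biquad routine could look — a minimal Direct Form I sketch in C++ (my illustration, not code from any particular library); the coefficient names follow Eq. (3), and computing them (e.g., from Eq. (2)) is left to the caller:

class Biquad {
public:
    void setCoefficients(float b0, float b1, float b2, float a1, float a2) {
        b0_ = b0; b1_ = b1; b2_ = b2; a1_ = a1; a2_ = a2;
    }
    float process(float x) {
        // y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2]
        float y = b0_ * x + b1_ * x1_ + b2_ * x2_ - a1_ * y1_ - a2_ * y2_;
        x2_ = x1_; x1_ = x;   // shift the input delay line
        y2_ = y1_; y1_ = y;   // shift the output delay line
        return y;
    }
private:
    float b0_ = 1.f, b1_ = 0.f, b2_ = 0.f, a1_ = 0.f, a2_ = 0.f;
    float x1_ = 0.f, x2_ = 0.f, y1_ = 0.f, y2_ = 0.f;
};

Feeding it the coefficients of a known textbook filter first, as suggested above, separates testing of the implementation from testing of the design.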
• Thanks Matt! I followed that approach and also the "how to apply a biquad" here: dsprelated.com/freebooks/filters/BiQuad_Section.html and it seems to have worked nicely. I haven't hooked it up to a spectrogram but it's functioning and outputting audio that sounds like one would expect. Appreciate the guidance. It's much easier when you do it with standardized coefficients like that and it also makes it more efficient because then I only have to calculate the sets of coefficients when changing parameters. Obviously this makes it much cheaper per sample for processing. Thanks again. – mike Jan 5 at 23:50
|
2020-05-29 11:29:02
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 26, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6176224946975708, "perplexity": 1109.0678261854016}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347402885.41/warc/CC-MAIN-20200529085930-20200529115930-00460.warc.gz"}
|
https://deepai.org/publication/impulsive-control-for-g-aimd-dynamics-with-relaxed-and-hard-constraints
|
# Impulsive Control for G-AIMD Dynamics with Relaxed and Hard Constraints
Motivated by various applications from Internet congestion control to power control in smart grids and electric vehicle charging, we study Generalized Additive Increase Multiplicative Decrease (G-AIMD) dynamics under impulsive control in continuous time with the time average alpha-fairness criterion. We first show that the control under relaxed constraints can be described by a threshold. Then, we propose a Whittle-type index heuristic for the hard constraint problem. We prove that in the homogeneous case the index policy is asymptotically optimal when the number of users is large.
## I Introduction
For nearly two decades the Additive Increase Multiplicative Decrease (AIMD) mechanism was one of the main components of the TCP/IP protocol regulating data traffic across the Internet [25]. In the absence of significant queueing delay, AIMD increases the data sending rate linearly in time until packet loss and then drastically, in a multiplicative fashion, reduces the sending rate. However, in the most recent versions of TCP (Compound [26] in Windows and Cubic [12] in Linux), the linear growth function has been changed to non-linear functions to enable agile adaptation of the data sending rate. Such modifications can be viewed as particular cases of Non-linear AIMD (NAIMD) dynamics. The possibilities of non-linear modifications of AIMD are really endless. A thorough classification of NAIMD dynamics, together with the analysis of some NAIMD classes, can be found in the book [9]. Here we consider one important class of NAIMD dynamics, which we refer to as Generalized AIMD (G-AIMD) [6]. In the G-AIMD dynamics the acceleration of the sending rate in the increase phase depends on the current value of the rate.
The other important recent development in the Internet architecture is the introduction of Software-Defined Networking (SDN) technology [18]. The SDN technology allows much finer control of resource allocation (e.g., bandwidth allocation) in a network. Motivated by this opportunity, in the present work we study the control of G-AIMD dynamics. In the networking context, when allocating resource, it is very common to use some fairness function as optimization objective. In the foundational work [15], the authors proposed to use proportional fairness in the context of the network utility maximization problem. Then, in [22] the $\alpha$-fairness function was proposed, which generalizes the proportional fairness and gives max-min fairness and delay fairness as the other important particular cases. A very good review of the network utility maximization problem can be found in [24]. Most of the works on the resource allocation problem concern long-term fairness, which ignores instantaneous oscillations of the sending rate, or short-term fairness. Short-term fairness is particularly important in wireless and electrical networks. Following [2], in this work we optimize the integral of the $\alpha$-fairness function over time, which represents short-term fairness.
We would like to note that recently AIMD and more generally NAIMD found new applications in smart electrical grids [10, 13, 17] and in power control for charging electric vehicles [9, 11, 23]. We hope that our findings will also be useful in these application domains.
Let us specifically describe our contributions: in the next section we formulate the problem of short-term $\alpha$-fairness for resource allocation among G-AIMD users as an impulsive control problem under constraints with time average criterion. We would like to note that our impulsive control is different from the standard impulsive control setting [21], where there is a constraint on the number of impulses or on the total variation of the impulse control. In our case, we have only a constraint on the system state. The present work also represents an advance with respect to our previous work [6], where we have not considered the setting with constraints. Here we consider both hard and relaxed constraints. In Section III we show that in the case of the relaxed constraints, the optimal impulsive control of the G-AIMD dynamics can be given in the threshold form. Then, in Section IV we propose a heuristic, which is similar in spirit to the celebrated Whittle index [28]. We would like to note that in the past several attempts to prove indexability of AIMD [8, 14] and G-AIMD [5] dynamics have been made. However, to the best of our knowledge, it is for the first time that we prove the indexability of the G-AIMD dynamics without any artificial conditions. We were able to make this theoretical advance largely thanks to the framework of impulsive control in continuous time. The previous works on TCP indexability are all in discrete time, and some are also in the discrete state space, but [5] is in the continuous state space. Similarly to [27], we are able to show that in the homogeneous case the index policy is asymptotically optimal in the regime of a large number of users. As a by-product, we prove the global stability of the AIMD dynamics and the local asymptotic stability of the G-AIMD dynamics under the index policy in the homogeneous setting. This extends the work [3] on the reduce max rate policy, where only the existence and uniqueness of a fixed point was shown but the stability in the deterministic setting was not investigated. We conclude the paper in Section V with future research directions.
## Ii Model and problem formulation
Let us consider a Generalized Additive Increase Multiplicative Decrease (G-AIMD) dynamics with $N$ users in continuous time. In the absence of a control signal, the allocation to user $k$ (e.g., transmission rate in Internet congestion control or instantaneous power in charging stations for electric vehicles) increases according to the differential equation:
$$\frac{dx_k}{dt} = a_k x_k^{\gamma_k}, \tag{1}$$
with constants $a_k > 0$ and $\gamma_k$. Continuous-time models represent well the TCP sending rate evolution on the scale of several round-trip times [7, 29].
We consider impulsive control. Namely, when the control signal (impulse) is sent to user $k$ at time $t$, the resource allocation to user $k$ drastically decreases according to
$$x_k(t+0) = b_k x_k(t), \tag{2}$$
with $0 < b_k < 1$. We note that the above dynamics is fairly general and covers at least three important particular cases: $\gamma_k = 0$ retrieves the classical Additive Increase Multiplicative Decrease (AIMD) mechanism [25], $\gamma_k = 0.75$ corresponds to Compound TCP [26] when queueing delays are not large, and $\gamma_k = 1$ corresponds to the Multiplicative Increase Multiplicative Decrease (MIMD) mechanism or Scalable TCP [16]. MIMD is a very aggressive dynamics [1] and, in contrast, AIMD is much more gentle. Compound TCP is designed to represent a good balance between the two extremes.
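To fix ideas, here is a minimal forward-Euler sketch of the dynamics (1)–(2) in C++; the fixed reduction threshold and all parameter values are purely illustrative assumptions, not taken from the paper.

#include <cmath>
#include <cstdio>

int main() {
    double a = 1.0, gamma = 0.0, b = 0.5;   // gamma = 0: classical AIMD
    double x = 0.1, xbar = 2.0, dt = 1e-3;  // xbar: illustrative threshold
    for (int step = 0; step < 20000; ++step) {
        x += dt * a * std::pow(x, gamma);   // increase phase, Eq. (1)
        if (x >= xbar) x *= b;              // impulsive decrease, Eq. (2)
    }
    std::printf("x(T) = %f\n", x);
    return 0;
}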
Let us define formally a class of policies slightly larger than the class of purely deterministic policies. The need for such a class of policies will be clear from the subsequent development.
###### Definition 1
• Let $k \in \{1,\ldots,N\}$ be fixed. For user $k$ a policy is a sequence, say $\{t_i\}_{i\ge 1}$,
of time moments when an impulse (a multiplicative decrease in his sending rate) is applied. Here
$\{t_i\}_{i\ge 1}$ is a monotone nondecreasing sequence of constants in $[0,\infty)$. It is possible that multiple impulses are applied at a single time moment, but we require $t_i \to \infty$.
• Let $u_k'$ and $u_k''$ be two policies for user $k$. Then, for each $p \in [0,1]$ we denote by $p\,u_k' \oplus (1-p)\,u_k''$
a mixture of the two policies, which with probability $p$
chooses the sequence $u_k'$ and with the complementary probability chooses the sequence $u_k''$. For user $k$, we denote by $U_k'$ (resp., $U_k$) the set of policies (resp. all such mixed policies for all $p \in [0,1]$).
• We introduce the notation $u = (u_1,\ldots,u_N)$ and $U := \prod_{k=1}^{N} U_k$.
Let the initial state be fixed. Each policy $u_k$ defines the dynamics of $x_k(t)$ (stochastic, if the policy is mixed), and the corresponding expectation is denoted by $\mathbb{E}^{u_k}$.
Let us denote by $x(t) = (x_1(t),\ldots,x_N(t))$ the vector of resource allocations at time $t$. Ideally, at each time moment we aim to operate the system under the constraint:
$$\sum_{k=1}^{N} x_k(t) \le c, \quad \forall\, t, \tag{3}$$
where $c$ is the resource (e.g., transmission capacity or electric power). It appears that if we substitute the above hard constraint with a soft time-averaged constraint, the problem becomes more tractable. Namely, consider
$$\sum_{k=1}^{N}\limsup_{\tau\to\infty}\frac{1}{\tau}\,\mathbb{E}^{u_k}\!\left[\int_0^\tau x_k(t)\,dt\right] \le c.$$
Our first objective is to propose an impulsive control in closed form, which solves the following constrained problem:
$$J(u) := \sum_{k=1}^{N}\liminf_{\tau\to\infty}\frac{1}{\tau}\,\mathbb{E}^{u_k}\!\left[\int_0^\tau \frac{x_k^{1-\alpha}(t)}{1-\alpha}\,dt\right] \longrightarrow \sup_{u\in U}, \quad \text{subject to: } \sum_{k=1}^{N}\limsup_{\tau\to\infty}\frac{1}{\tau}\,\mathbb{E}^{u_k}\!\left[\int_0^\tau x_k(t)\,dt\right] \le c. \tag{4}$$
Here the initial state is arbitrarily fixed and will not be indicated, and the generic notation $\mathbb{E}^{u}$ is in use. For each $\alpha > 0$, $x^{1-\alpha}/(1-\alpha)$ is the short-term $\alpha$-fairness [2]. The short-term $\alpha$-fairness is a versatile fairness concept, which retrieves as particular cases: proportional fairness ($\alpha \to 1$), delay-based fairness ($\alpha = 2$) and max-min fairness ($\alpha \to \infty$).
In order to deal with the control problem under constraints, we use the multiobjective optimization approach. To this end, let us define the two competing objectives:
$$J(u) = \sum_{k=1}^{N} J_k(u_k), \qquad J_k(u_k) := \liminf_{\tau\to\infty}\frac{1}{\tau}\,\mathbb{E}^{u_k}\!\left[\int_0^\tau \frac{x_k^{1-\alpha}(t)}{1-\alpha}\,dt\right];$$
$$G(u) = \sum_{k=1}^{N} G_k(u_k), \qquad G_k(u_k) := \limsup_{\tau\to\infty}\frac{1}{\tau}\,\mathbb{E}^{u_k}\!\left[\int_0^\tau x_k(t)\,dt\right].$$
It appears that it is more convenient to consider $-J$ instead of $J$, which leads to the standard multiobjective problem:
$$-J(u) \to \inf_{u\in U}, \qquad G(u) \to \inf_{u\in U}.$$
Throughout this paper, we assume the following
###### Assumption 1
$\alpha \neq 1$ and $\alpha + \gamma_k \neq 2$ for each $k \in \{1,\ldots,N\}$.
The particular cases excluded by the assumption can be separately analyzed using similar techniques. We exclude such cases for the sake of presentation smoothness and because of space limitation.
## Iii Control in the relaxed case
Let us formally justify the reduction of the problem with the relaxed constraint to the multiobjective formulation and demonstrate how the original solution can be reconstructed.
To scalarize the multiobjective problem, we introduce the variable weight $\lambda \ge 0$ and consider the combined criterion
$$L(\lambda,u) = \sum_{k=1}^{N} L_k(\lambda,u_k) := \sum_{k=1}^{N}\bigl(-J_k(u_k) + \lambda\,G_k(u_k)\bigr) \to \inf_{u\in U}.$$
Note that the above problem reduces to $N$ subproblems: for each $k$,
$$-J_k(u_k) + \lambda\,G_k(u_k) \to \inf_{u_k\in U_k}. \tag{5}$$
###### Lemma 1
For each $k$ and $\lambda > 0$, an optimal policy for problem (5) is of threshold type, with the threshold given by
$$\bar{x}_k(\lambda) = \left\{\frac{(2-\gamma_k)\,\bigl(1-b_k^{2-\alpha-\gamma_k}\bigr)}{\bigl(1-b_k^{2-\gamma_k}\bigr)\,(2-\alpha-\gamma_k)\,\lambda}\right\}^{1/\alpha}. \tag{6}$$
In greater detail, under this threshold policy, user $k$ decreases the sending rate at time $t$ as soon as $x_k(t) \ge \bar{x}_k(\lambda)$. (It is clear that this threshold policy induces a policy in $U_k$.)
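As a side note, (6) is straightforward to evaluate numerically; a direct C++ transcription (a sketch assuming $\gamma_k \neq 1$, $\alpha + \gamma_k \neq 2$ and $\lambda > 0$, in line with Assumption 1) could look like:

#include <cmath>

// Threshold (6): the rate level at which user k applies an impulse.
double threshold(double b, double gamma, double alpha, double lambda) {
    double num = (2.0 - gamma) * (1.0 - std::pow(b, 2.0 - alpha - gamma));
    double den = (1.0 - std::pow(b, 2.0 - gamma)) * (2.0 - alpha - gamma) * lambda;
    return std::pow(num / den, 1.0 / alpha);
}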
Proof. Let $\lambda$ be fixed. As was shown in [6] (see Theorem 3.1 therein), the policy, say $u_k^*$, defined by the threshold given by (6) is optimal for the following problem:
$$\limsup_{\tau\to\infty}\frac{1}{\tau}\,\mathbb{E}^{u_k}\!\left[\int_0^\tau\left(-\frac{x_k^{1-\alpha}(t)}{1-\alpha} + \lambda\,x_k(t)\right)dt\right] \to \inf_{U_k'}. \tag{7}$$
It is clear that this policy is also optimal for the above problem over all the policies in $U_k$, i.e.,
$$\limsup_{\tau\to\infty}\frac{1}{\tau}\,\mathbb{E}^{u_k}\!\left[\int_0^\tau\left(-\frac{x_k^{1-\alpha}(t)}{1-\alpha} + \lambda\,x_k(t)\right)dt\right] \to \inf_{U_k}.$$
(In fact, if it were outperformed by a mixed policy, then there would be another deterministic policy outperforming this threshold policy, which contradicts the optimality of the threshold policy out of $U_k'$.) Then
$$-J_k(u_k^*) + \lambda\,G_k(u_k^*) = \lim_{\tau\to\infty}\frac{1}{\tau}\,\mathbb{E}^{u_k^*}\!\left[\int_0^\tau\left(-\frac{x_k^{1-\alpha}(t)}{1-\alpha} + \lambda\,x_k(t)\right)dt\right] \le \limsup_{\tau\to\infty}\frac{1}{\tau}\,\mathbb{E}^{u_k}\!\left[\int_0^\tau\left(-\frac{x_k^{1-\alpha}(t)}{1-\alpha} + \lambda\,x_k(t)\right)dt\right] \le -J_k(u_k) + \lambda\,G_k(u_k)$$
for each $u_k \in U_k$.
###### Lemma 2
For each $k$ and $\lambda > 0$, under the threshold policy given by (6), if $\gamma_k \neq 1$:
$$-J_k^*(\lambda) := -J_k(u_k^*) = -\frac{[\bar{x}_k(\lambda)]^{1-\alpha}\,\bigl(1-b_k^{2-\alpha-\gamma_k}\bigr)\,(1-\gamma_k)}{(1-\alpha)\,(2-\alpha-\gamma_k)\,\bigl(1-b_k^{1-\gamma_k}\bigr)},$$
$$G_k^*(\lambda) := G_k(u_k^*) = \bar{x}_k(\lambda)\,\frac{\bigl(1-b_k^{2-\gamma_k}\bigr)\,(1-\gamma_k)}{(2-\gamma_k)\,\bigl(1-b_k^{1-\gamma_k}\bigr)},$$
$$L_k^*(\lambda) := -J_k(u_k^*) + \lambda\,G_k(u_k^*) = -\bar{x}_k(\lambda)\,\lambda\,\frac{\alpha}{1-\alpha}\,\frac{\bigl(1-b_k^{2-\gamma_k}\bigr)\,(1-\gamma_k)}{(2-\gamma_k)\,\bigl(1-b_k^{1-\gamma_k}\bigr)};$$
and if $\gamma_k = 1$:
$$-J_k^*(\lambda) = \frac{[\bar{x}_k(\lambda)]^{1-\alpha}\,\bigl(1-b_k^{1-\alpha}\bigr)}{(1-\alpha)^2\,\ln b_k}, \qquad G_k^*(\lambda) = \bar{x}_k(\lambda)\,\frac{b_k-1}{\ln b_k}, \qquad L_k^*(\lambda) = -\bar{x}_k(\lambda)\,\lambda\,\frac{\alpha}{1-\alpha}\,\frac{b_k-1}{\ln b_k}.$$
Proof. The details needed for the derivation of the above objectives can be found in [6].
Let us now investigate the trade-off of $-J_k^*$ against $G_k^*$. We consider two cases, (a) $\alpha < 1$ and (b) $\alpha > 1$, separately. The following two observations hold for each $k$.
• $\alpha < 1$: by equation (6), if $\lambda \to 0$ then $\bar{x}_k(\lambda) \to \infty$ and consequently $G_k^* \to \infty$; at the same time, $[\bar{x}_k(\lambda)]^{1-\alpha} \to \infty$ and consequently $-J_k^* \to -\infty$. Now if $\lambda \to \infty$ then $\bar{x}_k(\lambda) \to 0$ and $G_k^* \to 0$; and at the same time $[\bar{x}_k(\lambda)]^{1-\alpha} \to 0$ and consequently $-J_k^* \to 0$.
• $\alpha > 1$: Again by equation (6), if $\lambda \to 0$ then $\bar{x}_k(\lambda) \to \infty$ and consequently $G_k^* \to \infty$. However, in this case $[\bar{x}_k(\lambda)]^{1-\alpha} \to 0$ and consequently $-J_k^* \to 0$. Now if $\lambda \to \infty$ then $\bar{x}_k(\lambda) \to 0$ and $G_k^* \to 0$; and at the same time $[\bar{x}_k(\lambda)]^{1-\alpha} \to \infty$ and consequently $-J_k^* \to \infty$.
Next we establish the convexity of the epigraph.
###### Lemma 3
For each $k$, $G_k^*$, legitimately regarded as a function of $-J_k^*$, is convex. Moreover, its epigraph coincides with the convex hull of its graph.
Proof: To prove the convexity, it will be more convenient to consider the parametrization with respect to $\bar{x}_k$. We note that since there is a one-to-one correspondence between $\bar{x}_k$ and $\lambda$, the two parametrizations are equivalent. Observe that
$$-J_k^*(\bar{x}_k) = -c_1\,\frac{\bar{x}_k^{1-\alpha}}{1-\alpha}, \quad c_1 > 0, \qquad G_k^*(\bar{x}_k) = c_2\,\bar{x}_k, \quad c_2 > 0,$$
where the constants $c_1$ and $c_2$ come from the expressions in Lemma 2 (for $\gamma_k \neq 1$ and $\gamma_k = 1$, respectively). Thus, we can write
$$G_k^*(-J_k^*) = c_2\left[\frac{(-J_k^*)\,(1-\alpha)}{-c_1}\right]^{\frac{1}{1-\alpha}}.$$
Hence,
$$\frac{dG_k^*}{d(-J_k^*)} = -\frac{c_2}{c_1}\left[\frac{(-J_k^*)\,(1-\alpha)}{-c_1}\right]^{\frac{\alpha}{1-\alpha}},$$
and
$$\frac{d^2G_k^*}{d(-J_k^*)^2} = \frac{\alpha\,c_2}{c_1^2}\left[\frac{(-J_k^*)\,(1-\alpha)}{-c_1}\right]^{\frac{2\alpha-1}{1-\alpha}} > 0,$$
since $\alpha > 0$ always.
The last assertion follows from the two observations before this lemma: there is no asymptote if $\alpha < 1$, and the same conclusion holds if $\alpha > 1$, too. Examples of the epigraph in the two cases are displayed in Figures 1.(a) and 1.(b). This completes the proof.
###### Remark 1
For each $k$, denote by $\Omega_k$ the convex hull of the graph (or equivalently the epigraph, according to the previous lemma) of $G_k^*$ as a function of $-J_k^*$. It can be seen that $\Omega_k$ coincides with the set of performance vectors $(-J_k(u_k), G_k(u_k))$ achievable by policies $u_k \in U_k$. Indeed, if there were some achievable point outside $\Omega_k$, it could only lie below the graph of $G_k^*$ against $-J_k^*$, but then for some $\lambda$ it would contradict the fact that the threshold policy given by $\bar{x}_k(\lambda)$ is optimal for problem (5) with the same $\lambda$. This observation is important for the argument below.
Now we consider problem (II), and reformulate it in the space of performance vectors. That is, we reformulate
$$\begin{cases}-J(u) = \sum_{k=1}^{N}\bigl(-J_k(u_k)\bigr) \to \inf_{u\in U},\\[4pt] G(u) - c = \sum_{k=1}^{N} G_k(u_k) - c \le 0,\end{cases}$$
as
$$\begin{cases}-\tilde{J}(\omega) \to \inf_{\omega\in\Omega},\\[4pt] \tilde{G}(\omega) - c \le 0,\end{cases} \tag{10}$$
where
$$-\tilde{J}(\omega) := \sum_{k=1}^{N} \omega_k^1, \qquad \tilde{G}(\omega) = \sum_{k=1}^{N} \omega_k^2,$$
and where
$$\omega = \{(\omega_k^1,\omega_k^2)\}_{k=1}^{N} \in \Omega := \prod_{k=1}^{N} \Omega_k \subset \mathbb{R}^{2N}.$$
In fact, these two problems are equivalent because of the following. For each $u \in U$ there exists some $\omega \in \Omega$ such that $-J_k(u_k) = \omega_k^1$ and $G_k(u_k) = \omega_k^2$ for all $k$, and conversely for each $\omega \in \Omega$ there exists some $u \in U$ satisfying the same relations; recall Remark 1. However, the correspondence may be not one-to-one.
We shall effectively solve problem (10), whose optimal solution then induces one to problem (II).
The main statement is now in position.
###### Theorem 1
The following assertions hold.
• The set $\Omega$ is convex in $\mathbb{R}^{2N}$, and the functions $-\tilde{J}$ and $\tilde{G}$ on $\Omega$ are convex and real-valued.
• There exists some $\hat\omega \in \Omega$ such that $\tilde{G}(\hat\omega) < c$, i.e., Slater's condition for problem (10) is satisfied.
• The threshold policy $u^*$ is optimal for problem (II), where for each $k$, $u_k^*$ is induced by the threshold $\bar{x}_k(\lambda^*)$, with
$$\lambda^* = \frac{1}{c^\alpha}\Biggl(\sum_{k:\,\gamma_k\neq 1}\frac{(1-\gamma_k)\,\bigl(1-b_k^{2-\gamma_k}\bigr)}{(2-\gamma_k)\,\bigl(1-b_k^{1-\gamma_k}\bigr)}\left(\frac{2-\gamma_k}{1-b_k^{2-\gamma_k}}\right)^{1/\alpha}\left(\frac{1-b_k^{2-\alpha-\gamma_k}}{2-\alpha-\gamma_k}\right)^{1/\alpha} + \sum_{k:\,\gamma_k=1}\left(\frac{1-b_k^{1-\alpha}}{1-\alpha}\right)^{1/\alpha}\frac{(1-b_k)^{(\alpha-1)/\alpha}}{-\ln b_k}\Biggr)^{\alpha}. \tag{11}$$
In the homogeneous case ($b_k = b$ and $\gamma_k = \gamma$ for all $k$) the expression becomes even simpler:
$$\lambda^* = \frac{N^\alpha}{c^\alpha}\,\frac{(1-\gamma)^\alpha\,\bigl(1-b^{2-\gamma}\bigr)^\alpha}{\bigl(1-b^{1-\gamma}\bigr)^\alpha\,(2-\gamma)^\alpha}\cdot\frac{(2-\gamma)\,\bigl(1-b^{2-\alpha-\gamma}\bigr)}{\bigl(1-b^{2-\gamma}\bigr)\,(2-\alpha-\gamma)},$$
and, consequently,
$$\bar{x}_k(\lambda^*) = \frac{c}{N}\,\frac{1-b^{1-\gamma}}{1-\gamma}\,\frac{2-\gamma}{1-b^{2-\gamma}}, \tag{12}$$
in case $\gamma \neq 1$, and
$$\lambda^* = \frac{1}{c^\alpha}\left(\frac{N\,\bigl(1-b^{1-\alpha}\bigr)^{1/\alpha}\,(b-1)}{\bigl((1-b)(1-\alpha)\bigr)^{1/\alpha}\,\ln b}\right)^{\alpha}$$
in case $\gamma = 1$.
Proof. Part (a) is evident. For part (b), note that one can take $\hat\omega \in \Omega$ such that
$$\forall\, k\in\{1,\ldots,N\}, \quad \hat\omega_k^2 < \frac{c}{N}.$$
This is possible because $G_k^*(\lambda)$ approaches zero when $\lambda \to \infty$. Thus, Slater's condition is satisfied.
The rest of this proof verifies part (c). For each $\lambda > 0$ let $\omega^*(\lambda)$ be generated by the threshold policy determined by the threshold $\bar{x}_k(\lambda)$, $k = 1,\ldots,N$. We solve
$$\tilde{G}(\omega^*(\lambda)) = c \tag{13}$$
for $\lambda$; the solution is $\lambda^*$ given by (11). Then it holds that
$$-\tilde{J}(\omega^*(\lambda^*)) + \lambda^*\bigl(\tilde{G}(\omega^*(\lambda^*)) - c\bigr) \le -\tilde{J}(\omega) + \lambda^*\bigl(\tilde{G}(\omega) - c\bigr), \quad \forall\, \omega\in\Omega, \tag{14}$$
by Lemma 1. According to Theorem 1 of Section 8.4 in [19], this shows that $\omega^*(\lambda^*)$ solves problem (10). Part (c) immediately follows.
Consider $\lambda^*$ given by (11). According to (14) and the fact that (13) is satisfied by $\lambda^*$, we see
$$\mu_0 = \inf_{\omega\in\Omega}\bigl\{-\tilde{J}(\omega) + \lambda^*\bigl(\tilde{G}(\omega) - c\bigr)\bigr\},$$
where
$$\mu_0 = \inf\, -\tilde{J}(\omega), \quad \text{subject to } \omega\in\Omega,\ \tilde{G}(\omega) \le c.$$
Any constant $\lambda' \ge 0$ satisfying the above equality with $\lambda^*$ replaced by $\lambda'$ is sometimes called a geometric multiplier for problem (10), see Definition 6.1.1 of [4]. The following result from [19, Thm. 1 in Sect. 8.3], see the proof therein, shows that $\lambda^*$ is the unique geometric multiplier for problem (10).
###### Proposition 1
Let $\Omega$ be a convex set. Let $f$ be a real-valued convex function on $\Omega$ and $G$ be a real-valued convex function on $\Omega$. Assume the existence of a point $\omega_1 \in \Omega$ for which $G(\omega_1) < 0$. Let
$$\mu_0 = \inf f(\omega), \quad \text{subject to } \omega\in\Omega,\ G(\omega) \le 0, \tag{15}$$
and assume $\mu_0$ is finite. Then there is a number $\lambda' \ge 0$ such that
$$\mu_0 = \inf_{\omega\in\Omega}\{f(\omega) + \lambda' G(\omega)\}, \tag{16}$$
and thus a geometric multiplier exists. Furthermore, for each geometric multiplier $\lambda'$, if the infimum is achieved in (15) by an $\omega^* \in \Omega$, $G(\omega^*) \le 0$, it is achieved by $\omega^*$ in (16) and
$$\lambda' G(\omega^*) = 0.$$
###### Corollary 1
$\lambda^*$ given by (11) is the unique geometric multiplier for problem (10).
Proof. Suppose $\lambda'$ is a geometric multiplier for problem (10). Let $\omega^*(\lambda^*)$ be as in the proof of Theorem 1. Let us verify that $\lambda' > 0$. Suppose for contradiction that $\lambda' = 0$. Consider the case of $\alpha < 1$. Remember, $\mu_0$ is finite. However, since $\lambda' = 0$ is a geometric multiplier,
$$\forall\, k\in\{1,\ldots,N\}, \quad \inf_{\omega_k\in\Omega_k}[\omega_k^1] = -\infty,$$
and
$$\mu_0 = \inf_{\omega\in\Omega}[-\tilde{J}(\omega)] = -\infty,$$
which leads to a contradiction. Consider the case of $\alpha > 1$. Then $\mu_0$ is strictly positive. However,
$$\forall\, k\in\{1,\ldots,N\}, \quad \inf_{\omega_k\in\Omega_k}[\omega_k^1] = 0,$$
and since $\lambda' = 0$ is a geometric multiplier,
$$\mu_0 = \inf_{\omega\in\Omega}[-\tilde{J}(\omega)] = 0,$$
which again is a contradiction. Thus, $\lambda' > 0$. According to Proposition 1, $\lambda'$ satisfies $\tilde{G}(\omega^*) = c$, which admits the unique solution $\lambda' = \lambda^*$.
The above results call for a number of distributed control algorithms. At first, let us suppose that the numbers of users’ types are known to all users or broadcasted to the users by a central authority (e.g., SDN controller). Then, each user can calculate its threshold by (6),(1) and can control his rate by reducing it when the threshold is achieved. Thus, except for the complete initial knowledge of the system’s parameters, no further exchange of information is required.
Then, another interesting case is when each user knows its individual parameters but not the parameters of the other users. In this case, the central controller can calculate the Lagrange multiplier by equation (1) and distribute it to the users.
## Iv Index policy for hard constraint
Since $\bar{x}_k(\lambda)$ is a monotone decreasing function of $\lambda$, the comparison of the current rate $x_k$ with $\bar{x}_k(\lambda)$ provides the optimal solution for the relaxed problem formulation. What is more, the fact that $\bar{x}_k(\lambda)$ is a monotone decreasing function implies indexability of the problem with the hard constraint [28].
Then, we can propose the following heuristic for the case of the hard constraint [28]: whenever the hard constraint (3) is achieved, the user with the minimal value of the index $\lambda_k(x_k)$ — the value of $\lambda$ for which $\bar{x}_k(\lambda) = x_k$ in (6) — reduces his rate. Let us call the resulting policy the Whittle-type index policy or briefly the index policy.
It is very intriguing to observe that the expression for $\lambda_k(x_k)$ contains neither the parameters of the other users nor the number of users. Therefore, the Whittle-index-type approach may be very useful in the adaptive scenario when the number of users changes with time.
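To make the heuristic concrete, here is a rough C++ simulation sketch; the index below is simply (6) solved for $\lambda$, and all numeric parameter values are arbitrary illustration choices, not prescriptions from the paper (it assumes $\gamma_k \neq 1$ and $\alpha + \gamma_k \neq 2$, as in Assumption 1).

#include <cmath>
#include <cstdio>
#include <vector>

struct User { double a, gamma, b, x; };

// Whittle-type index: the lambda for which the threshold (6) equals x.
double indexOf(const User& u, double alpha) {
    double num = (2.0 - u.gamma) * (1.0 - std::pow(u.b, 2.0 - alpha - u.gamma));
    double den = (1.0 - std::pow(u.b, 2.0 - u.gamma)) * (2.0 - alpha - u.gamma);
    return num / (den * std::pow(u.x, alpha));  // decreasing in x
}

int main() {
    double alpha = 0.5, c = 10.0, dt = 1e-3;
    std::vector<User> users = {{1.0, 0.0, 0.5, 1.0}, {2.0, 0.5, 0.7, 1.0}};
    for (int step = 0; step < 100000; ++step) {
        double sum = 0.0;
        for (auto& u : users) {
            u.x += dt * u.a * std::pow(u.x, u.gamma);  // increase phase (1)
            sum += u.x;
        }
        if (sum >= c) {  // hard constraint (3) met: impulse the min-index user
            User* m = &users[0];
            for (auto& u : users)
                if (indexOf(u, alpha) < indexOf(*m, alpha)) m = &u;
            m->x *= m->b;  // multiplicative decrease (2)
        }
    }
    for (const auto& u : users) std::printf("x = %f\n", u.x);
    return 0;
}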
From now on, in this section, we consider the homogeneous case, i.e., we suppose $b_k = b$ and $\gamma_k = \gamma$ for each $k$. This is the standard first step in the analysis of index policies [27]. As previously, Assumption 1 is supposed to hold without explicit references. It is without loss of generality to assume $a_k = a$.
Let $u^{\mathrm{ind}}$ be the index policy. Let $u^*$ be the threshold policy obtained in Theorem 1, which is optimal for problem (II). Note that the index policy satisfies the hard capacity constraint (3). Therefore, denoting by $U^H$ the class of policies satisfying the hard capacity constraint (3), one has
$$J(u^{\mathrm{ind}},x,c,N) \le \sup_{u\in U^H} J(u,x,c,N) \le J(u^*,x,c,N),$$
for each initial state $x$, capacity constraint $c$, and number of users $N$, which we signify in this section for the following reason. Our objective is to show that the index policy is asymptotically optimal in the following sense:
$$\lim_{N\to\infty}\frac{1}{N}\,J(u^{\mathrm{ind}},x,cN,N) = \lim_{N\to\infty}\frac{1}{N}\,J(u^*,x,cN,N). \tag{17}$$
In the important case of $\gamma = 0$, corresponding to the AIMD dynamics, we show that the index policy is asymptotically optimal for each initial state, and in the case of $\gamma \neq 0$, we show that it is asymptotically optimal for initial states close enough to the steady state.
### Iv-a The AIMD (γ=0) case
Suppose $\gamma_k = 0$ and $b_k = b$ for each $k$.
We first observe that, since the index $\lambda_k(x)$ is monotone and decreasing in $x$, in the homogeneous case the index policy is equivalent to the policy that reduces the maximal sending rate at the moment when the hard constraint is achieved. Let us now consider, under the index policy, the sequence of the sending rates observed at each time when the capacity constraint is met. Following [3], for each $\tilde{x} = (\tilde{x}_1,\ldots,\tilde{x}_N)$ such that
$$\tilde{x}_1 \ge \tilde{x}_2 \ge \cdots \ge \tilde{x}_N > 0; \qquad \sum_{i=1}^{N}\tilde{x}_i = c, \tag{18}$$
we introduce
$$g(\tilde{x}) := \bigl(g_1(\tilde{x}),\ldots,g_N(\tilde{x})\bigr)$$
defined in the following way. If
$$\tilde{x}_k \ge b\,\tilde{x}_1 > \tilde{x}_{k+1} \tag{19}$$
for some $k \in \{1,\ldots,N\}$, with the convention $\tilde{x}_{N+1} := 0$, then
$$\begin{cases}g_1(\tilde{x}) := \tilde{x}_2 + b_1\tilde{x}_1,\\ \quad\vdots\\ g_{k-1}(\tilde{x}) := \tilde{x}_k + b_1\tilde{x}_1,\\ g_k(\tilde{x}) := b\,\tilde{x}_1 + b_1\tilde{x}_1,\\ g_{k+1}(\tilde{x}) := \tilde{x}_{k+1} + b_1\tilde{x}_1,\\ \quad\vdots\\ g_N(\tilde{x}) := \tilde{x}_N + b_1\tilde{x}_1,\end{cases}$$
where the last two lines are not relevant if $k = N$, and $b_1 := (1-b)/N$ is a constant. Note that if $\Delta(\tilde{x})$ denotes the time duration since the reduction of $\tilde{x}_1$ according to the index policy until the next time when the hard capacity constraint is met, then
$$b_1\tilde{x}_1 = \frac{(1-b)\,\tilde{x}_1}{N} = a\,\Delta(\tilde{x}). \tag{20}$$
The interpretation of $g(\tilde{x})$ is the vector of the ordered sending rates, from the largest to the smallest one, at the next time the hard capacity constraint is met (before the reduction), starting from $\tilde{x}$. Put $\tilde{x}^{(0)} := \tilde{x}$, with $\tilde{x}$ a fixed vector satisfying (18), and
$$\tilde{x}^{(m)} := g(\tilde{x}^{(m-1)}) =: g^{(m)}(\tilde{x}), \quad m \ge 1;$$
we are interested in $\tilde{x}^{(m)}$ as $m \to \infty$. Let us introduce, for each vector $\tilde{x}$ satisfying (18), $k(\tilde{x})$ as the integer satisfying (19).
###### Theorem 2
Suppose $\gamma_k = 0$ and $b_k = b$ for each $k$. Then the mapping $g$ has a unique fixed point, say $\tilde{x}^*$, in the space of vectors satisfying (18), given by
$$\tilde{x}_n^* = \left(b + \frac{(N-n+1)(1-b)}{N}\right)\frac{c}{Nb + \frac{(N+1)(1-b)}{2}}, \tag{21}$$
$n = 1,\ldots,N$, and $g^{(m)}(\tilde{x}) \to \tilde{x}^*$ as $m \to \infty$.
Proof. Firstly, note that there exists some integer $m$ such that $k(\tilde{x}^{(m)}) = N$, for otherwise the sending rate of some user would have blown up, violating the hard capacity constraint.
Next, observe that if $k(\tilde{x}^{(m)}) = N$ for some $m$, then $k(\tilde{x}^{(m+1)}) = N$ as well. Indeed, if this was not the case, then we would have
$$b\,\tilde{x}_1^{(m+1)} = b\,\bigl(\tilde{x}_2^{(m)} + b_1\tilde{x}_1^{(m)}\bigr) > b\,\tilde{x}_1^{(m)} + b_1\tilde{x}_1^{(m)}$$
and thus
$$0 \ge b\,\bigl(\tilde{x}_2^{(m)} - \tilde{x}_1^{(m)}\bigr) > b_1\bigl(\tilde{x}_1^{(m)} - b\,\tilde{x}_1^{(m)}\bigr) > 0,$$
which is a desired contradiction. Therefore, from some $m$ on and for all subsequent steps, the maximal sending rate (before reduction) when the hard capacity constraint is met will become the minimal sending rate (just after reduction).
Consequently, for all large enough $m$ we have
$$\tilde{x}^{(m+1)} = g(\tilde{x}^{(m)}) = A\,\tilde{x}^{(m)}, \qquad A = \begin{pmatrix} b_1 & 1 & 0 & \cdots & 0\\ b_1 & 0 & 1 & \cdots & 0\\ \vdots & \vdots & \vdots & & \vdots\\ b_1 & 0 & 0 & \cdots & 1\\ b_1+b & 0 & 0 & \cdots & 0 \end{pmatrix}.$$
Since the matrix $A$ is an aperiodic irreducible (column) stochastic matrix, we conclude that $\tilde{x}^{(m)}$ converges to the unique fixed point of $g$ in the space of vectors satisfying (18).
Let us compute the fixed point by solving the following system:
$$\begin{cases}\tilde{x}_1^* = \tilde{x}_2^* + b_1\tilde{x}_1^*,\\ \tilde{x}_2^* = \tilde{x}_3^* + b_1\tilde{x}_1^*,\\ \quad\vdots\\ \tilde{x}_N^* = b\,\tilde{x}_1^* + b_1\tilde{x}_1^*,\\ \sum_{i=1}^{N}\tilde{x}_i^* = c,\end{cases}$$
which gives
$$\begin{cases}\tilde{x}_n^* = \bigl(b + (N-n+1)\,b_1\bigr)\,\tilde{x}_1^*, \quad 1 \le n \le N,\\ \sum_{i=1}^{N}\tilde{x}_i^* = c.\end{cases}$$
Therefore,
$$\tilde{x}_1^* = \frac{c}{\sum_{i=1}^{N}(b + i\,b_1)} = \frac{c}{Nb + \frac{(N+1)(1-b)}{2}}, \qquad \tilde{x}_n^* = \left(b + \frac{(N-n+1)(1-b)}{N}\right)\frac{c}{Nb + \frac{(N+1)(1-b)}{2}}, \quad 2 \le n \le N,$$
see (21).
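As a quick numerical sanity check of Theorem 2, here is a C++ sketch iterating the map $g$ and comparing the limit to the closed form (21); it assumes (validly, for the equal-split initial condition used here) that the iteration is in the $k = N$ regime from the start.

#include <cstdio>
#include <vector>

int main() {
    int N = 5; double b = 0.5, c = 10.0, b1 = (1.0 - b) / N;
    std::vector<double> x(N, c / N);  // ordered vector on the simplex (18)
    for (int m = 0; m < 200; ++m) {
        std::vector<double> y(N);
        // k = N regime: every rate gains b1*x[0]; the largest is cut to b*x[0]
        for (int i = 0; i + 1 < N; ++i) y[i] = x[i + 1] + b1 * x[0];
        y[N - 1] = (b + b1) * x[0];
        x = y;  // the new vector is again ordered in this regime
    }
    double denom = N * b + (N + 1) * (1.0 - b) / 2.0;
    for (int n = 1; n <= N; ++n) {
        double closedForm = (b + (N - n + 1) * (1.0 - b) / N) * c / denom;
        std::printf("n=%d iterate=%.6f closed-form (21)=%.6f\n", n, x[n - 1], closedForm);
    }
    return 0;
}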
We remark that the reduce maximal sending rate policy was investigated in [3]. There only the existence and uniqueness of the fixed point (21) was established but the convergence or the absence of cycling behaviour was not shown.
Next, we shall scale the capacity constraint by a multiplicative constant . When we do such scaling, it is convenient to signify the dependence of
|
2022-05-18 10:22:43
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8204809427261353, "perplexity": 868.4171313635721}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662521883.7/warc/CC-MAIN-20220518083841-20220518113841-00589.warc.gz"}
|
https://www.open-systems-pharmacology.org/OSPSuite-R/reference/createSimulationBatch.html
|
Creates and returns an instance of a SimulationBatch that can be used to efficiently vary parameters and initial values in a simulation.
## Usage
createSimulationBatch(
simulation,
parametersOrPaths = NULL,
moleculesOrPaths = NULL
)
## Arguments
simulation
Instance of a Simulation to simulate in a batch mode
parametersOrPaths
Parameter instances (element or vector) typically retrieved using getAllParametersMatching or parameter path (element or vector of strings) that will be varied in the simulation. (optional) When providing the paths, only absolute full paths are supported (i.e., no matching with '*' possible). If parametersOrPaths is NULL, you will not be able to set parameter values during batch run.
moleculesOrPaths
Molecule instances (element or vector) typically retrieved using getAllMoleculesMatching or molecule path (element or vector of strings) that will be varied in the simulation. (optional) When providing the paths, only absolute full paths are supported (i.e., no matching with '*' possible). If moleculesOrPaths is NULL, you will not be able to set molecule initial values during batch run.
## Value
SimulationBatch that can be used to vary parameter values or molecule initial values and run simulation in an optimized manner
## Examples
simPath <- system.file("extdata", "simple.pkml", package = "ospsuite")
|
2022-12-05 22:11:59
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6754980683326721, "perplexity": 4457.216338446705}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711045.18/warc/CC-MAIN-20221205200634-20221205230634-00350.warc.gz"}
|
https://www.sas1946.com/main/index.php?topic=61106.msg678953
|
### AuthorTopic: B.A.T. V3.6 THE BLITZ (Read 36370 times)
#### Whacker
• member
• Offline
• Posts: 53
##### Re: B.A.T. V3.6 THE BLITZ + V3.6.2 PATCH
« Reply #132 on: June 03, 2019, 01:34:41 AM »
OK, so completely loaded question here, just out of curiosity since I'm moderately technically literate and know a wee bit of programming.
When we say "reworked the classes", what does that mean, roughly? Was it something to do with how the game handled aircraft in memory? For example, I'm imagining if I have 10 BF109s in a melee, and say each BF109 takes up 10 meg of RAM when each instance is created... did we figure out a way to just load a single reference and then create instances to point to that same reference? Or something completely else? INQUISITIVE MINDS WANT TO KNOW!
Congrats btw guys, this is why I keep coming back to this game year after year.
Logged
#### VF19_Congo
• member
• Offline
• Posts: 96
• Uh... Wwhuut?
##### Re: B.A.T. V3.6 THE BLITZ + V3.6.2 PATCH
« Reply #133 on: June 05, 2019, 04:45:00 AM »
OK, so completely loaded question here, just out of curiosity since I'm moderately technically literate and know a wee bit of programming.
When we say "reworked the classes", what does that mean, roughly? Was it something to do with how the game handled aircraft in memory? For example, I'm imagining if I have 10 BF109s in a melee, and say each BF109 takes up 10 meg of RAM when each instance is created... did we figure out a way to just load a single reference and then create instances to point to that same reference? Or something completely else? INQUISITIVE MINDS WANT TO KNOW!
Congrats btw guys, this is why I keep coming back to this game year after year.
From the first post in this thread:
"This allows for more aircraft or objects in any given mission and has pushed the boundary of performance far out,
so it will not be a concern any more for the foreseeable future."
Also, does this have anything to do with RAM usage in mission? I thought that every object in use consumed ram,
so, if more stuff is going on in the mission, more ram usage ?
How far can we push ram usage in the IL2 Selector's settings really, and how do we know when we've gone too far?
Logged
I'm the clown walking around with a bucket over his head....
#### SAS~vampire_pilot
• SAS Team
• member
• Offline
• Posts: 5333
• The man of a million fixes
##### Re: B.A.T. V3.6 THE BLITZ + V3.6.2 PATCH
« Reply #134 on: June 05, 2019, 10:30:44 AM »
Quote
When we say "reworked the classes", what does that mean, roughly?
It means literally what it says. Take the classfile, clean it up java-wise, rewrite certain features, particularly loadout references, and put it back.
Mike did this.
But if you want to you can continue the discussion here:
https://www.sas1946.com/main/index.php/topic,61392.0.html
Logged
#### dietz
• Supporter
• member
• Offline
• Posts: 778
• " Keep watching the skies...."
##### Re: B.A.T. V3.6 THE BLITZ + V3.6.2 PATCH
« Reply #135 on: June 12, 2019, 05:39:01 PM »
Hey thanks for using my artwork to head this new update .I would have liked to have been asked , but I tell you what: you've done so much to improve this game I'll make it a tiny payback for all that effort!
Thanks so much for this easy to download & easy to install Blitz add-on! Terrific! ( Esp the new & improved He-177!)
www.jamesdietz.com
Logged
#### SAS~vampire_pilot
• SAS Team
• member
• Offline
• Posts: 5333
• The man of a million fixes
##### Re: B.A.T. V3.6 THE BLITZ + V3.6.2 PATCH
« Reply #136 on: June 12, 2019, 10:36:15 PM »
Quote
Hey thanks for using my artwork to head this new update .I would have liked to have been asked , but I tell you what: you've done so much to improve this game I'll make it a tiny payback for all that effort!
Hi
But you actually have been asked, you might have forgotten about it.
It's been some time ago when you posted some of your artworks and Mike asked if these could be used for BAT and you said sure
https://www.sas1946.com/main/index.php/topic,59732.msg658401.html#msg658401
Sorry for any misunderstandings however.
Logged
#### dietz
• Supporter
• member
• Offline
• Posts: 778
• " Keep watching the skies...."
##### Re: B.A.T. V3.6 THE BLITZ + V3.6.2 PATCH
« Reply #137 on: June 13, 2019, 12:03:20 PM »
No problemo- I wasn't being snarky...I really do mean that I am delighted with your work & me contributing in a small way!
Logged
#### Jimbo947
• member
• Offline
• Posts: 103
##### Re: B.A.T. V3.6 THE BLITZ + V3.6.2 PATCH
« Reply #138 on: June 13, 2019, 12:54:50 PM »
Just a comment on how brilliant smashing the Java wall is..
On my friend's PC, a Dell Inspiron desktop, vintage 2007/2008 (AMD Athlon II x4, 8GB RAM and an Nvidia GeForce 330 512MB),
previously all BAT campaigns, except DOF, would run as a boring PowerPoint, regardless of the settings... 1 fps zzzz... 2 fps yawn etc.
Now, all WW2 and Jet Age campaigns, using High settings only and without any tweaks, all run at an average of 34 fps, dependent on the mission, sometimes getting 60 fps with a very occasional momentary frame freeze.
Using QMB and Excellent settings, although with fewer aircraft and map dependent, he obtains a 40 fps average.
Again, thanks for this brilliant work
Logged
#### vonofterdingen
• Missioneer
• member
• Offline
• Posts: 830
##### Re: B.A.T. V3.6 THE BLITZ + V3.6.2 PATCH
« Reply #139 on: June 13, 2019, 12:55:41 PM »
This artwork is really first-rate dietz, thanks.
Logged
#### David Prosser
• member
• Offline
• Posts: 3199
##### Re: B.A.T. V3.6 THE BLITZ + V3.6.2 PATCH
« Reply #140 on: July 05, 2019, 09:07:11 PM »
I've tried to re-install BAT BLITZ to my second copy of BAT, as WAW stopped working when I tried to install a new plane. Here is what the computer said. I've tried four times.
Code:
Checking File BLITZ_BAT01.zip
BLITZ_BAT01.zip is okay!
Checking File BLITZ_BAT02.zip
BLITZ_BAT02.zip is okay!
Checking File BLITZ_BAT03.zip
BLITZ_BAT03.zip is damaged!
Checksum should be: 0e419853
Checksum is : 00000000
Checking File BLITZ_BAT04.zip
BLITZ_BAT04.zip is okay!
Errors occured, aborting installation!
Creating Installer Logfile
cheers
David
#### danperin
• member
• Offline
• Posts: 263
##### Re: B.A.T. V3.6 THE BLITZ + V3.6.2 PATCH
« Reply #141 on: July 05, 2019, 09:25:58 PM »
Dietz, your artwork is superb...Just beautiful
Logged
#### SAS~Storebror
• Editor
• member
• Offline
• Posts: 19441
• Failure is not an option.
##### Re: B.A.T. V3.6 THE BLITZ + V3.6.2 PATCH
« Reply #142 on: July 05, 2019, 11:18:19 PM »
I've tried to re-install BAT BLITZ
(...)
Code:
Checking File BLITZ_BAT03.zip
BLITZ_BAT03.zip is damaged!
Checksum should be: 0e419853
Checksum is : 00000000
Looks like either BLITZ_BAT03.zip is missing or empty.
Grab a fresh copy of BLITZ_BAT03.zip.
Mike
Logged
While love lives, while suns set, while snowmoons rise in november skies, while ties bind, while souls cry, while oceans ride in an endless tide, while love grows, while blood flows, while truth is real you need never feel alone.
In loving memory of times when I cared.
#### Barkhorn1x
• member
• Offline
• Posts: 372
##### Re: B.A.T. V3.6 THE BLITZ + V3.6.2 PATCH
« Reply #143 on: July 17, 2019, 08:08:07 AM »
Took a bit of a hiatus after 3.30 and come back to find...THIS...AND the DGEN patch. WOW!!! Great job and thanks to all the modders.
Question on the new GUI; What do the little red buttons on the right side of the QMB screens do?
(Figured it out - DUH!)
Thanks again.
Logged
|
2020-07-02 19:38:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5622870922088623, "perplexity": 14233.583202897345}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655879738.16/warc/CC-MAIN-20200702174127-20200702204127-00388.warc.gz"}
|
https://www.vedantu.com/question-answer/due-to-an-increase-in-the-price-of-sugar-by-25-class-11-maths-cbse-5f52f7eecf806237fb031ed1
|
Question
# Due to an increase in the price of sugar by 25%, by how much percent must a householder decrease the consumption of sugar so that there is no increase in the expenditure on sugar?
Hint: Assume the household consumption to be x and let it be decreased by y% so that net expenditure remains constant. Assume initial prices to be p. Now use the fact that net expenditure is constant to get a value for y.
Let the initial prices of sugar be p, the initial household consumption be x and let it be decreased by y% so that the net expenditure remains the same.
Initial expenditure $=prices\times consumption=px$
New prices = p + 25% of p = $p+\dfrac{25}{100}\times p=\text{ }p\text{ }+\text{ }0.25p\text{ }=\text{ }1.25p$
New consumption = x – y% of x $=x-\dfrac{y}{100}x=x-\dfrac{xy}{100}$
New expenditure = prices $\times$ consumption = $1.25p\times \left[ x-\dfrac{xy}{100} \right]=1.25px\left[ 1-\dfrac{y}{100} \right]$
Since new expenditure = initial expenditure we have $1.25px\left[ 1-\dfrac{y}{100} \right]=px$
Dividing both sides by px we get
$1.25\left[ 1-\dfrac{y}{100} \right]=1$
Dividing both sides by 1.25 we get
$1-\dfrac{y}{100}=\dfrac{1}{1.25}=\dfrac{100}{125}=\dfrac{4}{5}$
Subtracting 1 on both sides we get
$1-\dfrac{y}{100}-1=\dfrac{4}{5}-1$
$\Rightarrow -\dfrac{y}{100}=\dfrac{-1}{5}$
Multiplying both sides by -1 we get
$\dfrac{y}{100}=\dfrac{1}{5}$
Cross multiplying, we get
$5y=100$
Dividing both sides by 5, we get
$\dfrac{5y}{5}=\dfrac{100}{5}$
i.e. y = 20
Hence the consumption should be decreased by 20% so that there is no net change in the expenditure on sugar.
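As a quick numerical check: if sugar initially costs ₹100 per kg and the household buys 1 kg, the expenditure is ₹100. After the 25% increase the price is ₹125, and reducing consumption by 20% (to 0.8 kg) gives an expenditure of $125 \times 0.8 = ₹100$ — unchanged, as required.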
Note: This question can be solved directly using ratio and proportion.
We know that net expenditure $=prices\times consumption$
So that $prices\propto \dfrac{1}{consumption}$
$prices=\dfrac{k}{consumption}$
So we have $\dfrac{price{{s}_{f}}-price{{s}_{i}}}{price{{s}_{i}}}=0.25$ where $price{{s}_{i}}$ are the initial prices and $price{{s}_{f}}$ are the final prices.
Hence, we have
\begin{align} & \dfrac{\dfrac{k}{consumptio{{n}_{f}}}-\dfrac{k}{consumptio{{n}_{i}}}}{\dfrac{k}{consumptio{{n}_{i}}}}=\dfrac{25}{100} \\ & \Rightarrow \dfrac{consumptio{{n}_{i}}-consumptio{{n}_{f}}}{consumptio{{n}_{f}}}=\dfrac{25}{100} \\ \end{align}
Now we know that if $\dfrac{a}{b}=\dfrac{c}{d}$ then $\dfrac{a}{b+a}=\dfrac{c}{d+c}$
Using the above property, we get
\begin{align} & \dfrac{consumptio{{n}_{i}}-consumptio{{n}_{f}}}{consumptio{{n}_{f}}+\left( consumptio{{n}_{i}}-consumptio{{n}_{f}} \right)}=\dfrac{25}{100+25} \\ & \Rightarrow \dfrac{consumptio{{n}_{i}}-consumptio{{n}_{f}}}{consumptio{{n}_{i}}}=\dfrac{25}{125}=\dfrac{1}{5}=\dfrac{100}{5}\%=20\% \\ \end{align}
Hence the consumption should be decreased by 20%.
|
2020-10-29 13:57:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 2, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9978768229484558, "perplexity": 839.5533757303685}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107904287.88/warc/CC-MAIN-20201029124628-20201029154628-00543.warc.gz"}
|
https://math.stackexchange.com/questions/3164148/expectations-of-ito-integral/3164152
|
# Expectations of Ito integral
I have the following stochastic process represented by $$S_t = 1+\int_0^t \exp\left(\sigma B_s - \sigma^2\frac{s}{2}\right)dB_s,$$ where $$B_t$$ is a standard Brownian motion.
I would like to compute its expectations. I thought I could do it in two ways: $$(1)$$ using the fact that expectation of Ito integral is $$0$$ and $$(2)$$ using direct computations. But when I do this, I get two different answers.
Clearly, I must be doing something wrong, but I wonder if someone could point the problem to me.
(Expectations of Ito integral) Since the integrand is square integrable for a given t, one can simply use the fact that Ito integral with respect to Brownian motion will be 0. Thus,
$$\mathbb{E}[S_t]=1$$
(Direct computations) $$\mathbb{E}[S_t]=1+ \mathbb{E}\left[\int_0^t exp\left(\sigma B_s - \sigma^2\frac{s}{2}\right)dB_s\right]$$ Since integrand is non-negative, I thought I could apply Tonelli's Theorem to change the order of integration. But then $$\mathbb{E}[S_t] = 1+ \int_0^t \mathbb{E} \left[exp\left(\sigma B_s - \sigma^2\frac{s}{2}\right)\right]dB_s \\ = 1+ \int_0^t \mathbb{E} \left[exp\left(\sigma B_s \right)\right] exp\left(-\sigma^2\frac{s}{2}\right)dB_s \\ = 1+ \int_0^t exp\left(\sigma^2 \frac{s}{2} \right) exp\left(-\sigma^2\frac{s}{2}\right)dB_s \\ =1 + \int_0^t 1 dB_s \\ =1 + B(s) \neq 1$$
So it seems that Tonelli's (and Fubini's) theorems do not apply here. Is this because the integration is with respect to Brownian motion? Is there a way to compute this expectation directly?
You cannot write $$E\int_0^{t} Y(s)dB_s$$ as $$\int_0^{t} (EY(s))dB_s$$. The left side is a number and the right side is a random variable.
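To see why the first approach is legitimate, one can verify the square integrability of the integrand directly (here Tonelli's theorem does apply, because the $ds$-integral is an ordinary Lebesgue integral, not a stochastic one):
$$\mathbb{E}\left[\int_0^t e^{2\sigma B_s - \sigma^2 s}\,ds\right] = \int_0^t \mathbb{E}\left[e^{2\sigma B_s}\right] e^{-\sigma^2 s}\,ds = \int_0^t e^{\sigma^2 s}\,ds = \frac{e^{\sigma^2 t}-1}{\sigma^2} < \infty,$$
so the Itô integral is a true martingale with zero mean, and $\mathbb{E}[S_t] = 1$.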
|
2019-08-22 07:56:53
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 10, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9804695844650269, "perplexity": 96.49196182380918}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027316785.68/warc/CC-MAIN-20190822064205-20190822090205-00199.warc.gz"}
|
https://www.shaalaa.com/question-bank-solutions/reshma-wishes-mix-two-types-food-p-q-such-way-that-vitamin-contents-mixture-contains-least-8-units-vitamin-11-units-vitamin-b-graphical-method-of-solving-linear-programming-problems_47708
|
# Reshma Wishes to Mix Two Types of Food P and Q in Such a Way that the Vitamin Contents of the Mixture Contains at Least 8 Units of Vitamin a and 11 Units of Vitamin B. - Mathematics
Sum
Reshma wishes to mix two types of food P and Q in such a way that the vitamin contents of the mixture contains at least 8 units of vitamin A and 11 units of vitamin B. Food P costs ₹60/kg and food Q costs ₹80/kg. Food P contains 3 units/kg of vitamin A and 5 units/kg of vitamin B while food Q contains 4 units/kg of vitamin A and 2 units/kg of vitamin B. Determine the minimum cost of the mixture.
#### Solution
Let x kg of food P and y kg of food Q be mixed together to make the mixture.
The cost of food P is ₹60/kg and that of Q is ₹80/kg. So, x kg of food P and y kg of food Q will cost ₹(60x + 80y).
Since one kg of food P contains 3 units of vitamin A and one kg of food Q contains 4 units of vitamin A, x kg of food P and y kg of food Q will contain (3x + 4y) units of vitamin A. But the mixture should contain at least 8 units of vitamin A.
∴ 3x + 4y ≥ 8
Similarly, x kg of food P and y kg of food Q will contain (5x + 2y) units of vitamin B. But the mixture should contain at least 11 units of vitamin B.
∴ 5x + 2y ≥ 11
Thus, the given linear programming problem is
Minimise Z = 60x + 80y
subject to the constraints
3x + 4y ≥ 8
5x + 2y ≥ 11
x, y ≥ 0
The feasible region determined by the given constraints is unbounded; its corner points are A $\left( \frac{8}{3}, 0 \right)$ , B $\left( 2, \frac{1}{2} \right)$ and C $\left( 0, \frac{11}{2} \right)$
The value of the objective function at these points are given in the following table.
| Corner point | Z = 60x + 80y | |
| --- | --- | --- |
| $\left( \frac{8}{3}, 0 \right)$ | $60 \times \frac{8}{3} + 80 \times 0 = 160$ | → Minimum |
| $\left( 2, \frac{1}{2} \right)$ | $60 \times 2 + 80 \times \frac{1}{2} = 160$ | → Minimum |
| $\left( 0, \frac{11}{2} \right)$ | $60 \times 0 + 80 \times \frac{11}{2} = 440$ | |
The smallest value of Z is 160, which is obtained at both of the points $\left( \frac{8}{3}, 0 \right)$ and $\left( 2, \frac{1}{2} \right)$.
It can be verified that the open half-plane represented by 60x + 80y < 160 has no common points with the feasible region.
So, the minimum value of Z is 160. Hence, the minimum cost of the mixture is ₹160.
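As a cross-check, the same LP can be handed to a solver. Below is a quick sketch with scipy.optimize.linprog (the library choice is an assumption; linprog expects ≤ constraints, so the ≥ rows are negated):

```python
# Quick numerical check of the LP with scipy.optimize.linprog.
from scipy.optimize import linprog

res = linprog(c=[60, 80],                      # minimise Z = 60x + 80y
              A_ub=[[-3, -4], [-5, -2]],       # 3x + 4y >= 8, 5x + 2y >= 11
              b_ub=[-8, -11],
              bounds=[(0, None), (0, None)])   # x, y >= 0
print(res.fun, res.x)                          # 160.0 at an optimal corner point
```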
Concept: Graphical Method of Solving Linear Programming Problems
#### APPEARS IN
RD Sharma Class 12 Maths
Chapter 30 Linear programming
Exercise 30.3 | Q 10 | Page 40
|
2022-05-17 00:38:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.29906126856803894, "perplexity": 1029.9765067271699}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662515466.5/warc/CC-MAIN-20220516235937-20220517025937-00580.warc.gz"}
|
https://buboflash.eu/bubo5/show-dao2?d=1442032782604
|
Tags
#biochem
Question
At room temperature (which we shall take to be ~300 K), the value of the thermal energy is [...] kJ·mol⁻¹.
~2.5
#### Parent (intermediate) annotation
At room temperature (which we shall take to be ~300 K), the value of the thermal energy is ~2.5 kJ·mol⁻¹. This means that if an interaction between two atoms is stabilized by less than ~2.5 kJ·mol⁻¹, then this interaction is very easily disrupted by collisions at room temperature.
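The quoted figure is simply the molar thermal energy $RT$; a trivial check:

```python
# Trivial sketch: thermal energy R*T at 300 K, in kJ/mol.
R = 8.314             # gas constant, J/(mol*K)
T = 300.0             # temperature, K
print(R * T / 1000)   # ~2.49, i.e. the ~2.5 kJ/mol quoted above
```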
|
2022-05-23 14:20:27
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8838661909103394, "perplexity": 3932.474648624073}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662558030.43/warc/CC-MAIN-20220523132100-20220523162100-00132.warc.gz"}
|
https://brilliant.org/100day/day25/
|
# 25 of 100: Life, the Universe, and Everything
The positive number $X$ is divisible by 42, and is composed of only 1s and 0s when written in base 10. What's the smallest number that $X$ might be?
If you don't know where to begin, take a peek at the divisibility rules on this page.
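If you want to verify your answer afterwards, here is a minimal brute-force sketch; it steps through multiples of 42 until the digits are all 0s and 1s:

```python
# Minimal brute force: smallest positive multiple of 42 whose
# base-10 digits are all 0s and 1s.
n = 42
while set(str(n)) - {"0", "1"}:   # loop while any digit other than 0/1 remains
    n += 42
print(n)   # 101010
```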
|
2020-04-07 23:56:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 2, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2123643308877945, "perplexity": 178.95653861475807}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371806302.78/warc/CC-MAIN-20200407214925-20200408005425-00338.warc.gz"}
|
https://www.nature.com/articles/s41598-020-77047-5?error=cookies_not_supported&code=921e84df-097a-4634-a99f-7e62adfd9178
|
# Experimental protection of quantum coherence by using a phase-tunable image drive
## Abstract
The protection of quantum coherence is essential for building a practical quantum computer able to manipulate, store and read quantum information with a high degree of fidelity. Recently, it has been proposed to increase the operation time of a qubit by means of strong pulses to achieve a dynamical decoupling of the qubit from its environment. We propose and demonstrate a simple and highly efficient alternative route based on Floquet modes, which increases the Rabi decay time ($$T_R$$) in a number of materials with different spin Hamiltonians and environments. We demonstrate the regime $$T_R \approx T_1$$ with $$T_1$$ the relaxation time, thus providing a route for spin qubits and spin ensembles to be used in quantum information processing and storage.
## Introduction
In open quantum systems, coherence of spin qubits is limited by spin-spin interactions, spin diffusion, inhomogeneity of the static and microwave fields1 as well as charge noise2. An increase in coherence time is achieved by dynamically decoupling (DD) qubits from their surroundings using distinct Electron Spin Resonance (ESR) pulses3,4,5,6,7. However, such pulses have inherent imperfections and fluctuations, thus requiring their own layer of DD, resulting in a doubly dressed qubit. The technique of concatenated DD8,9 has been proposed for nitrogen vacancy (NV) centers up to the second order of dressing8,10,11,12. Here we demonstrate a pulse protocol based on Floquet modes which successfully increases the decoherence time, independently of the qubit initial state, in a number of materials with different spin Hamiltonians and environments, such as low and high spin-orbit coupling for instance. Rather than focusing on decoupling from the bath by strong excitation, we use very weak pulses and alter the dynamics of the entire system. For short spin relaxation times accessible to our measurement setup (at around 40 K) one can do a direct comparison with the coherence time, and we demonstrate the regime $$T_R\approx T_1$$. In magnetically diluted systems $$T_1\gg T_2$$, e.g. $$T_1$$ of several ms in rare earth ions such as Y$$_2$$SiO$$_5$$:Er$$^{3+}$$13 and Y$$_2$$SiO$$_5$$:Yb$$^{3+}$$14 or $$^{28}$$Si:Bi with a tunable $$T_1$$ of thousands of seconds15. Our general method can thus lead to very long persistent Rabi oscillations, using a single circularly polarized image pulse.
The use of strong continuous microwave excitation has been proposed as a way to protect qubits16,17 although the quantum gates would need proper redesigning. In related studies, complex pulse design using an arbitrary waveform generator, proved essential in studying Floquet Raman transitions18,19 and quantum metric of a two-level system20 in nitrogen vacancy (NV) centers. It is worth noting that in the case of concatenated DD, the frequency of the second order ($$n=2$$) excitation has to match the Rabi frequency of the first excitation ($$n=1$$); also, the two excitations are linearly polarized and perpendicular to each other (the method extends to higher orders in n). Experimentally, the protocol quickly becomes complex and demanding in terms of pulse design and frequency stability, above the second order.
Our protocol uses two coherent microwave pulses: a main pulse drives the qubit Rabi precession while a low-power, circularly polarized (image) pulse continuously sustains the spin motion. The image drive has a frequency close to the main drive and its amplitude is 1-2 orders of magnitude smaller. In this way, a quantum gate could be driven by regular pulses, without the image pulse, while the time interval between gates could be filled with an integer number of Rabi nutations that use our protection protocol. Such scheme would protect the coherence of the qubit in-between usual quantum gates. We note that the initial phase difference between the two pulses allows to tune the spin dynamics by enhancing (or diminishing) the Floquet modes21 of its second dressing.
The protocol described here can impact quantum sensing of magnetic fields seen as a perturbation of a given Rabi oscillation. Its Fast Fourier Transform (FFT) has a width $$\sim T_R^{-1}$$ and the condition $$T_R\approx T_1$$ significantly improves the field resolution.
The technical implementation is simple and can be generalized to any type of qubit, such as superconducting circuits or spin systems. In this paper we focus on the experimental implementation and simply observe that numerical simulations based on Bloch model with $$T_2=2T_1$$ describe the final results very well.
## Results and discussion
The standard method to induce Rabi oscillations in a two-level system (TLS) is to apply an electro-magnetic pulse of frequency $$f_0$$ equal to the TLS level separation (resonance regime, $$g\mu _B H_0=hf_0$$, with $$\mu _B$$ the Bohr magneton, $$H_0$$ the static magnetic field and h the Planck constant). The pulse will drive the spin population coherently between the two states. Experimentally, the drive is at a frequency $$f_0+\Delta$$ (where $$f_0$$ is the Larmor frequency and $$\Delta$$ is a small detuning away from the resonance condition) followed by readout pulses of frequency $$f_0$$ to record the state $$\left\langle S_z\right\rangle$$ (see Fig. 1a). The method introduced here makes use of two coherent microwave pulses (see Fig. 1b,c): the drive pulse at $$f_0+\Delta$$ of amplitude $$h_d$$ and length $$\tau _{Rabi}$$ creates quantum Rabi oscillations while the second one sustains them using a very low-power image of the drive ($$h_i\ll h_d$$), operated at $$f_0-\Delta$$. In order to probe $$\left\langle S_z(\tau _{Rabi})\right\rangle$$ at the end of the Rabi sequence, we wait a time longer than $$T_2$$, such that $$\left\langle S_x\right\rangle \approx \left\langle S_y\right\rangle \approx 0$$, followed by $$\pi /2- \pi$$ pulses to create a Hahn echo of intensity proportional to $$\left\langle S_z\right\rangle$$ (Mn$$^{2+}$$ spins are read out without echo, as detailed in Sect. IIB of Supplementary Information). Figure 1b shows one way of coherently creating the drive and its image, by means of a mixer multiplying a pulse at frequency $$f_0$$ with an intermediate frequency (IF) cosine signal that allows control of the detuning $$\Delta$$, the phase $$\phi$$ and the pulse length, shape and amplitude.
Rabi oscillations of three different types of paramagnetic systems—a rare earth ion (Gd$$^{3+}$$), a transition metal ion (Mn$$^{2+}$$) and a defect in diamond (P1)—are shown in Fig. 1d–f, respectively. Their detuned Rabi oscillations induced by the drive pulse only (red curves) are of similar frequency ($$\approx 20$$ MHz) and last for a small number of nutations ($$<20$$). The blue curve shows the Rabi oscillation when the image pulse is superimposed. The oscillations remain intense far beyond the decay time of the red curves and their number is dramatically increased. This effect has maximum impact when the frequency difference between the drive and the image pulses $$2\Delta$$ matches the Rabi frequency induced by the main drive, $$F_R$$. In addition of the very long coherence time, we observe a slow amplitude modulation, depending on the phase $$\phi$$ and attributed to Floquet modes, as explained below.
We study the new decay time of the Rabi oscillation under image pumping, by tuning the relaxation time via temperature control and by applying the longest drive pulse available to us (Fig. 2). The length of the drive pulse is limited by the pulse power amplifier of the setup, with a maximum pulse length of 15 $$\upmu$$s. At 40 K, the relaxation time of the spin system MgO:Mn$$^{2+}$$ ($$S=5/2$$) is also $$\approx 15\,\upmu$$s. The Rabi oscillation for the transition $$+\frac{1}{2}\leftrightarrow -\frac{1}{2}$$ is shown in Fig. 2. Guidelines showing the exponential decays due to $$T_2$$ (in green) and $$T_1$$ (in orange), measured by Hahn echo and inversion recovery respectively, are added as well. While the amplitude of the Floquet mode (slow amplitude modulation) decreases with $$\sim$$ $$T_2$$, the Rabi oscillations persist with a decay time $$\approx T_1$$.
### Protection of quantum coherence for different initial states
We analyzed our protocol for different initial states. For initial and final states along $$+X$$, $$+Y$$ and $$+Z$$ we observe that the protection protocol is effective for times up to the maximum amplifier gate length of 15 $$\upmu$$s.
#### Pulse sequence
The protocol proposed here was tested for different initial states and spin systems. In the following, the details of the experiment are given. The ground state of the spin is along the $$+Z$$ axis and it is used as the initial state. In Fig. 3, this is shown in orange and labeled “preparation”. The $$+Z$$ state is obtained by thermalization. States $$+X$$ and $$+Y$$ are obtained from $$+Z$$ using a hard $$\frac{\pi }{2}$$ pulse around the y or x axes, respectively, able to excite the whole spin ensemble (or spectroscopic line). In MgO:Mn$$^{2+}$$ and P1, this is ensured by their narrow spectroscopic linewidths $$\Gamma$$, while in CaWO$$_4$$:Gd$$^{3+}$$, $$\Gamma$$ is of the same order of magnitude as the maximum excitation bandwidth. Therefore, some Gd$$^{3+}$$ spins of the spin packet might not be in a perfect $$+X$$ or $$+Y$$ state. However, we didn't notice any particular effect in the final results for Gd. We note that by combining rotations around any of the x, y and z axes, we can prepare the initial state in any position. Once prepared, the spin state is subjected to the coherence-protected Rabi protocol or to the usual Rabi drive for an integer number of Rabi flops (top and bottom “burst” panels in Fig. 3, respectively). Thus, the final state is along the same direction as the initial one22. The echo-based measurement for an initial state $$+Z$$ is shown in the green panel (with $$\tau _{wait}\gg T_2$$). For initial states $$+X$$ and $$+Y$$ (blue panel), one has to wait a time $$\tau _{free}$$ in order to let the spin packet defocus before applying a $$\pi$$ pulse and detect the subsequent echo signal.
Throughout this article, the $$\pi /2$$ and $$\pi$$ pulse lengths are 14 and 28 ns, respectively. The readout waiting times $$\tau _{wait}$$ are approximately 5 $$\upmu$$s for P1 defects, 6 $$\upmu$$s for Mn$$^{2+}$$ and 10 $$\upmu$$s for Gd$$^{3+}$$.
#### P1 defects in diamond
In the case of P1 defects in diamond, we present measurements of the spin echo signal when the initial and final states are along $$+Z$$, with and without the protection protocol for different lengths of the Rabi pulse $$\tau _{burst}$$. Without image pulse (Fig. 4a) the signal is visible after a Rabi burst of 300 ns, but it is rapidly lost for times longer than 1 $$\upmu$$s. With the protection protocol in place (Fig. 4b), the signal is almost entirely conserved for times up to 15 $$\upmu$$s. This behavior shows a significant improvement over the CPMG method (see Supplementary Information 1, $$T_2=0.69$$ $$\upmu$$s). For this experiment the temperature was set to $$T=$$15 K and $$\tau _{free}=300$$ ns.
#### CaWO$$_4$$:Gd$$^{3+}$$
For Gd$$^{3+}$$ spins, we measured the spin-echo signal after an integer number of Rabi flops for initial/final states along the $$+X$$, $$+Y$$ and $$+Z$$ axes. The Rabi oscillations are shown in Fig. 5a as the real and imaginary part of the recorded signal (blue and orange lines, respectively). One notes that for the initial state along $$+X$$ and $$+Y$$ the Rabi signal starts and ends at zero value, with the end time being indicated by the arrow in each inset. Figure 5b shows large echo signals after 10 $$\upmu$$s, much longer than $$T_{2CPMG}=4$$ $$\upmu$$s (see Supplementary Information 1), for each of the initial state preparations. Using a combination of rotations around the x, y and z axes, we can create any initial state for the spin and thus use the image pulse protection for any arbitrary state. The experimental conditions for these measurements are: temperature $$T=40$$ K and $$\tau _{free}=200$$ ns.
### Qubit dynamics
The qubit dynamics in the absence of a bath, is described by the spin Hamiltonian in the laboratory frame (see Supplementary Information 1):
\begin{aligned} {\mathcal {H}}= f_0S_z+2h_{d}S_x\sin (\omega _+t+\phi )+2h_{i}S_x\sin (\omega _-t-\phi -\theta ) \end{aligned}
(1)
where $$f_0$$ is the Larmor frequency caused by the static field, $$h_d$$ and $$h_i$$ are the microwave drive and image field, respectively, $$\frac{\omega _{+,-}}{2\pi }=f_0\pm \Delta$$, $$\phi$$ is a tunable phase (see Fig. 1) and $$\theta$$ is a small additional phase, possibly created by imperfections of the setup (as discussed in Supplementary Information 1). Variables $$f_0,h_{d,i}$$ and $$\Delta$$ are expressed in units of MHz with $$h_{d,i}\ll f_0,\omega _\pm$$. After using the rotating wave approximation (RWA) for a rotation with $$\omega _+$$, the Hamiltonian (1) becomes:
\begin{aligned} \mathcal H_{RF}&=-\Delta S_z+h_{d}(S_x\sin \phi -S_y\cos \phi )\nonumber \\&\quad -h_{i}[S_x\sin (4\pi \Delta t+\phi +\theta )+S_y\cos (4\pi \Delta t+\phi +\theta )]. \end{aligned}
(2)
When the image field $$h_i$$ is absent, the Eq. (2) has no explicit time dependence and the Rabi frequency is simply $$F_R=\sqrt{\Delta ^2+h_d^2}$$. When $$h_i$$ is present, the dynamics of $$\left\langle S_z\right\rangle$$ can be solved numerically, as it is shown in Fig. 6 for the case of CaWO$$_4$$:Gd$$^{3+}$$.
For a fixed power $$h_d$$ of the drive pulse, Rabi oscillations are measured as a function of the detuning $$\Delta$$. As shown in the contour plot of Fig. 6a, $$\left\langle S_z\right\rangle (t)$$ vanishes after a few oscillations except when the condition $$F_R\sim 2\Delta$$ is met. At this Floquet resonance, $$\left\langle S_z\right\rangle$$ keeps oscillating for a very long time ($$>15\,\upmu$$s). Its Fast Fourier Transform (FFT) is presented in Fig. 6b. The free (unprotected) Rabi oscillation mode $$F_R$$ is rather weak and broad, showing the large damping caused by the environment. However, when the mode crosses the frequency of the image pulse, indicated by the vertical white dashed line, the peak becomes intense and narrower, as the qubit protection from the environment is activated. The condition $$F_R=\sqrt{\Delta ^2+h_d^2}=2\Delta$$ (or $$h_d=\sqrt{3}\Delta$$) gives the most efficient protection of the Rabi oscillation (see Supplementary Information 1).
The general condition is $$F_R=n\Delta ,\ n\in N$$, showing a commensurate motion of the qubit and $$h_i$$ on the Bloch sphere. In other words, the qubit and the image field share a synchronous dynamics generated by the torques of the two drives (see also Sec. III D of the Supplementary Information 1).
We can compare the experimental result to the model described by the Hamiltonian (2) which can be rewritten as in Eq. (S14) in Supplementary Information : $${\mathcal {H}}_{RF}=-\Delta S_z+S_+[h_{d}e^{-i(\phi -\pi /2)}+h_{i}e^{i(4\pi \Delta t+\phi +\theta +\pi /2)}]$$. When the “image” pulse is not applied, the Hamiltonian is time independent and the propagator is simply the matrix exponential of the Hamiltonian: $$U_p(t)=\exp (-i2\pi {\mathcal {H}}_{RF} t)$$. When the image pulse is present ($$h_i>0$$), the Hamiltonian becomes explicitly time-dependent. Although a second canonical transformation RWA could remove the time dependence if $$\Delta \gg h_i$$, it is important to leave $$\Delta$$ as a free parameter since the method works at resonance as well ($$\Delta =0$$). Thus, for the sake of generality, we solved numerically the explicit time-dependent differential equations using QuTIP23. The parameters used in the simulation have been measured independently: the microwave drive field $$h_d$$ has been calibrated using the frequency of Rabi oscillations at no detuning ($$\Delta =0$$), the image drive $$h_i$$ was measured by a spectrum analyzer directly connected to the output of the AWG ($$h_i/h_d\approx 0.12$$), and the relaxation ($$T_1$$) and decoherence ($$T_2$$) times were measured by inversion recovery and the Carr-Purcell-Meiboom-Gill (CPMG) protocol, respectively (see Supplementary Information 1). We used the QuTiP implementation of Lindblad's master equation with $$S_-$$ as collapse operator, which is equivalent to the phenomenological Bloch model for the case $$T_2=2T_1$$. Figure 6c shows the FFT of $$\left\langle S_z\right\rangle (t)$$ computed using the time evolution of $${\mathcal {H}}_{RF}$$. The Hamiltonian (2) describes very well the protection of the coherence by means of the image pulse. Note the existence of a Floquet mode at $$\Delta$$=7.5 MHz of frequency $$\sim 1$$ MHz, visible in both the experimental and theoretical contour plots of Fig. 6.
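For readers who want to reproduce the qualitative behavior, the following is an illustrative QuTiP sketch of Eq. (2) for an effective two-level system with a Lindblad dissipator giving $$T_2=2T_1$$. The parameter values mirror those quoted in the text ($$h_d=\sqrt{3}\Delta$$, $$h_i/h_d\approx 0.12$$, $$T_1=15$$ $$\upmu$$s), but the script itself is a reconstruction under stated assumptions, not the authors' code; $$\theta$$ is set to zero and the time grid is arbitrary.

```python
# Illustrative sketch (not the authors' script): integrating Eq. (2) for a
# two-level system with QuTiP's Lindblad solver; dissipation gives T2 = 2*T1.
import numpy as np
from qutip import sigmax, sigmay, sigmaz, sigmam, basis, mesolve

Delta = 34.0                     # detuning (MHz)
h_d = np.sqrt(3.0) * Delta       # drive amplitude, so that F_R = 2*Delta
h_i = 0.12 * h_d                 # image amplitude (~18 dB below the drive)
phi = np.pi / 4                  # tunable phase (45 degrees); theta = 0 assumed
T1 = 15.0                        # relaxation time (us)

Sx, Sy, Sz = sigmax() / 2, sigmay() / 2, sigmaz() / 2
H0 = 2 * np.pi * (-Delta * Sz + h_d * (np.sin(phi) * Sx - np.cos(phi) * Sy))

def cx(t, args):                 # time-dependent image-field term along S_x
    return -2 * np.pi * h_i * np.sin(4 * np.pi * Delta * t + phi)

def cy(t, args):                 # time-dependent image-field term along S_y
    return -2 * np.pi * h_i * np.cos(4 * np.pi * Delta * t + phi)

H = [H0, [Sx, cx], [Sy, cy]]                    # QuTiP time-dependent format
tlist = np.linspace(0.0, 1.0, 20001)            # time in us
c_ops = [np.sqrt(1.0 / T1) * sigmam()]          # dissipation -> T2 = 2*T1
result = mesolve(H, basis(2, 0), tlist, c_ops, e_ops=[Sz])
sz = result.expect[0]            # <S_z>(t); its FFT shows the Rabi/Floquet modes
```

Sweeping $$\phi$$ between $$0^\circ$$ and $$45^\circ$$ in this sketch should reproduce the single versus split Rabi modes discussed below.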
The Floquet mode appears as beatings of the Rabi frequency and is $$\phi$$-tunable. Similarly to the case of Gd$$^{3+}$$, the qubit protection and Floquet mode dynamics are obtained for the $$S=5/2$$ spin of MgO:Mn$$^{2+}$$, here measured under the experimental conditions of Fig. 2. Rabi oscillations and corresponding FFT spectra are shown in Fig. 7 for two values of the initial phase, $$\phi =0^\circ$$ (green) and $$\phi =45^\circ$$ (gold), while simulations are shown in black. The decay times are much larger than $$T_2$$ for both values (here $$T_1\approx 15$$ $$\upmu$$s, see Fig. 2); however, the dynamics is strikingly different. When the drive and image pulses have the same initial phase (one can consider the initial time in Eq. (2) as $$-\frac{\theta }{4\pi \Delta }$$ without loss of generality), the Rabi oscillations have maximum visibility, with almost no beatings (see Supplementary Information 1). At $$\phi =45^\circ$$, the spin torques generated by the $$h_d$$ and $$h_i$$ fields induce strong beatings or a Floquet mode creating two additional modes of the Rabi frequency. The left panel shows Rabi splittings equal to the Floquet frequency for $$\phi =45^\circ$$ and a single Rabi oscillation for $$\phi =0^\circ$$.
Experimentally, we can continuously vary the value of $$\phi$$ and analyze the frequency and intensity of the Floquet mode. As an example, a comparison between theory and experiment is shown in Fig. 8 for the case of CaWO$$_4$$:Gd$$^{3+}$$for $$\Delta =h_d/\sqrt{3}=34$$ MHz. For even and odd multiples of $$\pi /4$$, a single and a splitted Rabi mode is observed, respectively. The Rabi splitting is the Floquet mode and is constant as a function of $$\phi$$ but its intensity oscillates with a period of $$\pi /2$$. The effect is evident in simulations as well, since the terms in $$h_{d,i}$$ of $${\mathcal {H}}_{RF}$$ are along the same direction or orthogonal, for $$\phi =0^\circ$$ and 45$$^\circ$$ respectively.
As regards decoherence sources, it is safe to assume that the main contribution comes from the spin bath made up of nuclear spins surrounding the central spin, as well as other electronic spins located in its close vicinity. Such a scenario is typical of spin systems operated at low enough temperatures to reduce the role of the phonon bath on $$T_2$$. The details of the entangled qubit-bath dynamics are outside the scope of the current study. We do observe that the final results are well described when dissipation is the only source of decoherence, leading to $$T_2=2T_1$$. This may indicate that the image pulse $$h_i$$ is able to control the dynamics and thus the decoherence of the spin bath.
The qubit rotation is thus tunable by using a pre-selected value of $$\phi$$, allowing complex rotations to be created. With a decoherence time approaching the spin lifetime $$T_1$$, the value of $$\phi$$ can be changed while qubit control is still ongoing. Our study demonstrates sustained quantum coherence using a general protocol that can be readily implemented for any type of qubit. Our approach can be used in other detection schemes, such as sensitive spin detection using on-chip resonance techniques24,25,26.
While preparing our manuscript for resubmission following initial refereeing, a related protocol applied to NV centers has been published27.
## Methods
### Spectrometer setup
The measurements have been performed on a conventional pulse ESR spectrometer Bruker E680 equipped with an incoherent electron double resonance (ELDOR) bridge and a coherent arbitrary waveform generator (AWG) bridge. In the ELDOR bridge (Fig. 1a), the drive and the readout pulses come from two independent sources, while with the AWG bridge (Fig. 1b) all the pulses are generated using the same microwave source and thus they are all phase coherent. The drive frequency is generated by mixing the source $$f_0$$ (used as a local oscillator) with a low-frequency, phase-controllable signal $$IF(\Delta ,\phi )$$ through an in-phase quadrature (IQ) mixer. Ideally, the output of the mixer is monochromatic with the frequency $$f_0+\Delta$$. In reality, the output consists of a principal frequency $$f_0+\Delta$$ (the drive) and of lower-amplitude images $$f_0+n\Delta$$ (see Supplementary Information 1 for more information). Since the effect of the image is the central part of this paper, we have characterized the AWG bridge using a spectrum analyzer, right before the power amplification stage. An example of spectrum is presented in Fig. S6 of the Supplementary Information. The power of the image at $$f_0-\Delta$$ is $$\approx 18$$ dB lower than that at $$f_0+\Delta$$. Consequently, an amplitude ratio of the MW magnetic fields $$h_i/h_d$$ around $$\sim$$0.12 is used in simulations.
### Pulse sequence
First, the system is set to the resonance condition $$g\mu _B H_0=hf_0$$. The drive pulse of amplitude $$h_d$$, frequency $$f_0+\Delta$$ and length $$\tau _{Rabi}$$ induces Rabi oscillations in the detuned regime. At the same moment the image pulse (generated through the IQ mixer) of amplitude $$h_i$$, frequency $$f_0-\Delta$$ and the same length $$\tau _{Rabi}$$ also irradiates the spins. In order to probe $$\left\langle S_z(\tau _{Rabi})\right\rangle$$ at the end of the Rabi sequence, we wait a time longer than $$T_2$$, such that $$\left\langle S_x\right\rangle \approx \left\langle S_y\right\rangle \approx 0$$, followed by $$\pi /2- \pi$$ pulses to create a Hahn echo of intensity proportional to $$\left\langle S_z\right\rangle$$.
### Spin systems
The methodology presented here is demonstrated on different spin systems: the P1 nitrogen-substitution defect in diamond ($$S=1/2$$) (concentration: 100 ppm), Mn$$^{2+}$$ impurities in MgO ($$S=5/2$$)28,29,30,31 with a concentration of 10 ppm, and Gd$$^{3+}$$ impurities in CaWO$$_4$$ ($$S=7/2$$)24,32 with a concentration of 50 ppm. Despite the large Hilbert space of the Mn$$^{2+}$$ and Gd$$^{3+}$$ spin Hamiltonians, the orientation of the magnetic field and the frequency and power of the microwave excitation are chosen to avoid multiple level transitions and thus select only one resonance28. Therefore, the spin systems can be considered as effective two-level systems undergoing coherent Rabi rotations. The spin Hamiltonians, operating parameters (fields and frequencies) as well as characteristic $$T_{1,2}$$ times for these materials are given in the Supplementary Information 1.
## Data availability
Data sets generated and analyzed during the current study are available from the corresponding author on request.
## References
1. Chirolli, L. & Burkard, G. Decoherence in solid-state qubits. Adv. Phys. 57, 225–285 (2008).
2. Yoneda, J. et al. A quantum-dot spin qubit with coherence limited by charge noise and fidelity higher than 99.9%. Nat. Nanotechnol. 13, 102–106 (2018).
3. Viola, L. & Lloyd, S. Dynamical suppression of decoherence in two-state quantum systems. Phys. Rev. A 58, 2733–2744 (1998).
4. Viola, L., Knill, E. & Lloyd, S. Dynamical decoupling of open quantum systems. Phys. Rev. Lett. 82, 2417–2421 (1999).
5. Viola, L., Lloyd, S. & Knill, E. Universal control of decoupled quantum systems. Phys. Rev. Lett. 83, 4888–4891 (1999).
6. Uhrig, G. S. Keeping a quantum bit alive by optimized pi-pulse sequences. Phys. Rev. Lett. 98, 100504 (2007).
7. Khodjasteh, K. & Lidar, D. A. Fault-tolerant quantum dynamical decoupling. Phys. Rev. Lett. 95, 180501 (2005).
8. Cai, J.-M. et al. Robust dynamical decoupling with concatenated continuous driving. New J. Phys. 14, 113023 (2012).
9. Cohen, I., Aharon, N. & Retzker, A. Continuous dynamical decoupling utilizing time-dependent detuning. Fortschritte Phys. 65, 1600071 (2017).
10. Farfurnik, D. et al. Experimental realization of time-dependent phase-modulated continuous dynamical decoupling. Phys. Rev. A 96, 013850 (2017).
11. Teissier, J., Barfuss, A. & Maletinsky, P. Hybrid continuous dynamical decoupling: a photon–phonon doubly dressed spin. J. Opt. 19, 044003 (2017).
12. Rohr, S. et al. Synchronizing the dynamics of a single nitrogen vacancy spin qubit on a parametrically coupled radio-frequency field through microwave dressing. Phys. Rev. Lett. 112, 010502 (2014).
13. Welinski, S. et al. Electron spin coherence in optically excited states of rare-earth ions for microwave to optical quantum transducers. Phys. Rev. Lett. 122 (2019).
14. Lim, H.-J., Welinski, S., Ferrier, A., Goldner, P. & Morton, J. J. L. Coherent spin dynamics of ytterbium ions in yttrium orthosilicate. Phys. Rev. B 97, 064409 (2018).
15. Bienfait, A. et al. Controlling spin relaxation with a cavity. Nature 531, 74–77 (2016).
16. Facchi, P., Lidar, D. A. & Pascazio, S. Unification of dynamical decoupling and the quantum Zeno effect. Phys. Rev. A 69, 032314 (2004).
17. Fanchini, F. F., Hornos, J. E. M. & Napolitano, R. d. J. Continuously decoupling single-qubit operations from a perturbing thermal bath of scalar bosons. Phys. Rev. A 75, 022329 (2007).
18. Shu, Z. et al. Observation of Floquet Raman transition in a driven solid-state spin system. Phys. Rev. Lett. 121, 210501 (2018).
19. Saiko, A. P., Markevich, S. A. & Fedaruk, R. Multiphoton Raman transitions and Rabi oscillations in driven spin systems. Phys. Rev. A 98, 043814 (2018).
20. Yu, M. et al. Experimental measurement of the quantum geometric tensor using coupled qubits in diamond. Natl. Sci. Rev. 7, 254–260 (2020).
21. Russomanno, A. & Santoro, G. E. Floquet resonances close to the adiabatic limit and the effect of dissipation. J. Stat. Mech.: Theory Exp. 2017, 103104 (2017).
22. Morton, J., Tyryshkin, A., Brown, R. & Shankar, S. Solid-state quantum memory using the 31P nuclear spin. Nature 455, 1085–1088 (2008).
23. Johansson, J., Nation, P. & Nori, F. QuTiP 2: A Python framework for the dynamics of open quantum systems. Comput. Phys. Commun. 184, 1234–1240 (2013).
24. Yue, G. et al. Sensitive spin detection using an on-chip SQUID-waveguide resonator. Appl. Phys. Lett. 111, 202601 (2017).
25. Probst, S. et al. Inductive-detection electron-spin resonance spectroscopy with 65 spins/√Hz sensitivity. Appl. Phys. Lett. 111, 202604 (2017).
26. Toida, H. et al. Electron paramagnetic resonance spectroscopy using a single artificial atom. Commun. Phys. 2, 1–7 (2019).
27. Miao, K. C., Blanton, J. P., Anderson, C. P., Bourassa, A., Crook, A. L., Wolfowicz, G., Abe, H., Ohshima, T. & Awschalom, D. D. Universal coherence protection in a solid-state spin qubit. Science 369, 1493–1497 (2020).
28. Bertaina, S. et al. Multiphoton coherent manipulation in large-spin qubits. Phys. Rev. Lett. 102, 050501 (2009).
29. Bertaina, S., Groll, N., Chen, L. & Chiorescu, I. Tunable multiphoton Rabi oscillations in an electronic spin system. Phys. Rev. B 84, 134433 (2011).
30. Bertaina, S., Martens, M., Egels, M., Barakel, D. & Chiorescu, I. Resonant single-photon and multiphoton coherent transitions in a detuned regime. Phys. Rev. B 92, 024408 (2015).
31. Bertaina, S., Yue, G., Dutoit, C.-E. & Chiorescu, I. Forbidden coherent transfer observed between two realizations of quasiharmonic spin systems. Phys. Rev. B 96, 024428 (2017).
32. Bertaina, S., Shim, J. H., Gambarelli, S., Malkin, B. Z. & Barbara, B. Spin-orbit qubits of rare-earth-metal ions in axially symmetric crystal fields. Phys. Rev. Lett. 103, 226402 (2009).
## Acknowledgements
ESR measurements were supported by the CNRS research infrastructure RENARD (award number IR-RPE CNRS 3443). Partial support by the National Science Foundation Cooperative Agreement No. DMR-1644779 and the State of Florida is acknowledged.
## Author information
### Contributions
S.B. and I.C. designed the experiment and analyzed the data. Measurements were performed at Lille University by H.V. and S.B. S.B., I.C. and H.De R. provided the theoretical background. All authors contributed to the writing of the manuscript.
### Corresponding authors
Correspondence to S. Bertaina or I. Chiorescu.
## Ethics declarations
### Competing interests
The authors declare no competing interests.
### Publisher's note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
## Rights and permissions
Reprints and Permissions
Bertaina, S., Vezin, H., De Raedt, H. et al. Experimental protection of quantum coherence by using a phase-tunable image drive. Sci Rep 10, 21643 (2020). https://doi.org/10.1038/s41598-020-77047-5
|
2021-01-25 15:03:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7450310587882996, "perplexity": 1406.855710631295}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703581888.64/warc/CC-MAIN-20210125123120-20210125153120-00057.warc.gz"}
|
https://www.clutchprep.com/chemistry/practice-problems/120902/a-0-50-m-solution-of-an-acid-ha-has-ph-2-24-what-is-the-value-of-ka-for-the-acid
|
# Problem: A 0.50 M solution of an acid HA has pH = 2.24. What is the value of Ka for the acid?
###### FREE Expert Solution
The dissociation of HA is as follows:
HA(aq) + H2O(l) ⇌ H3O+(aq) + A−(aq)
From this, we can construct an ICE table. Remember that liquids are ignored in the ICE table and Ka expression.
The Ka expression for HA is:
$K_\mathrm{a} = \dfrac{\text{products}}{\text{reactants}} = \dfrac{[\mathrm{H_3O^+}][\mathrm{A^-}]}{[\mathrm{HA}]}$
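The extract stops before the numbers are substituted; a quick numeric completion (a sketch using the standard weak-acid treatment, with x = [H3O+] = 10^(−pH) from the ICE table):

```python
# Hedged sketch: finishing the Ka calculation from the given pH.
pH, c0 = 2.24, 0.50
x = 10 ** (-pH)            # [H3O+] = [A-] at equilibrium
Ka = x**2 / (c0 - x)       # Ka = [H3O+][A-]/[HA], with [HA] = c0 - x
print(f"[H3O+] = {x:.2e} M, Ka = {Ka:.1e}")   # ~5.8e-03 M, Ka ~ 6.7e-05
```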
|
2021-04-11 12:27:28
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4255034327507019, "perplexity": 4500.328313005006}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038062492.5/warc/CC-MAIN-20210411115126-20210411145126-00267.warc.gz"}
|
https://crossminds.ai/video/improved-optimistic-algorithms-for-logistic-bandits-606f45ab072e523d7b780b96/
|
# Improved Optimistic Algorithms for Logistic Bandits
Jul 12, 2020
###### Details
The generalized linear bandit framework has attracted a lot of attention in recent years by extending the well-understood linear setting and allowing to model richer reward structures. It notably covers the logistic model, widely used when rewards are binary. For logistic bandits, the frequentist regret guarantees of existing algorithms are $\tilde{\mathcal{O}}(\kappa \sqrt{T})$, where $\kappa$ is a problem-dependent constant. Unfortunately, $\kappa$ can be arbitrarily large as it scales exponentially with the size of the decision set. This may lead to significantly loose regret bounds and poor empirical performance. In this work, we study the logistic bandit with a focus on the prohibitive dependencies introduced by $\kappa$. We propose a new optimistic algorithm based on a finer examination of the non-linearities of the reward function. We show that it enjoys a $\tilde{\mathcal{O}}(\sqrt{T})$ regret with no dependency in $\kappa$, but for a second order term. Our analysis is based on a new tail-inequality for self-normalized martingales, of independent interest.

Speakers: Louis Faury, Marc Abeille, Clément Calauzènes, Olivier Fercoq
|
2022-05-24 17:35:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7744122743606567, "perplexity": 823.7673207559409}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662573189.78/warc/CC-MAIN-20220524173011-20220524203011-00521.warc.gz"}
|
https://math.stackexchange.com/questions/1678246/integrating-int-fracx-cos-x1-sqrt2x3e-sin-xx2dx
|
# Integrating $\int \frac{x \cos x+1}{\sqrt{2x^3e^{\sin x}+x^2}}dx$
I came across a question today..
Integrate $\int \dfrac{x \cos x+1}{\sqrt{2x^3e^{\sin x}+x^2}}dx$
How to do it? I tried to take $x^2 e^{\sin x}$ out of the root, but it didn't work out. I also tried the substitution method with the whole denominator, but with no result.
• I'd love to see how to solve such an antiderivative. WA has no idea, and neither do I, what to do. It looks evil. – DonAntonio Mar 1 '16 at 8:48
• Is it a problem for students? What course? – Yuriy S Mar 1 '16 at 8:49
• @YuriyS It's like high level high school problem.. :p – manshu Mar 1 '16 at 8:57
• Depends on the country, I guess. We had nothing like this in high-school – Yuriy S Mar 1 '16 at 9:18
## 1 Answer
Take $x e^{\sin x} = t/2$, i.e. substitute $t = 2x e^{\sin x}$, so that $dt = 2e^{\sin x}(1 + x\cos x)\,dx$, and you will get the answer. Pull $x^2$ out of the root and multiply the numerator and the denominator by $e^{\sin x}$: the integral becomes $\int{dt\over t\sqrt{t+1}}$. Now substitute $t+1=y^{2}$.
• Excellent! It really works. +1 – DonAntonio Mar 1 '16 at 8:56
• Yes, I had my doubt's when reading this but now I've tried it, I'd say this was amazing. (+1) – Nikunj Mar 1 '16 at 9:12
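Following the substitution chain to the end gives, for $x>0$, the antiderivative $F(x)=\ln\left|\dfrac{\sqrt{2xe^{\sin x}+1}-1}{\sqrt{2xe^{\sin x}+1}+1}\right|+C$; here is a quick numerical spot-check (the sample points and step size are arbitrary choices):

```python
# Numerical spot-check (x > 0) of the antiderivative from the substitutions.
import numpy as np

def F(x):   # candidate antiderivative: log((y-1)/(y+1)) with y = sqrt(t+1)
    y = np.sqrt(2 * x * np.exp(np.sin(x)) + 1.0)
    return np.log((y - 1.0) / (y + 1.0))

def f(x):   # the original integrand
    return (x * np.cos(x) + 1.0) / np.sqrt(2 * x**3 * np.exp(np.sin(x)) + x**2)

xs, h = np.linspace(0.5, 3.0, 6), 1e-6
# central difference of F vs. the integrand; the mismatch is tiny (~1e-9)
print(np.max(np.abs((F(xs + h) - F(xs - h)) / (2 * h) - f(xs))))
```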
|
2020-01-22 14:30:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8509750962257385, "perplexity": 619.2430907682684}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250607118.51/warc/CC-MAIN-20200122131612-20200122160612-00448.warc.gz"}
|
https://stacks.math.columbia.edu/tag/0B0W
|
[Chapter V, Serre_algebre_locale]
Lemma 43.19.4. Let $X$ be a nonsingular variety. Let $\mathcal{F}$ and $\mathcal{G}$ be coherent sheaves on $X$ with $\dim (\text{Supp}(\mathcal{F})) \leq r$, $\dim (\text{Supp}(\mathcal{G})) \leq s$, and $\dim (\text{Supp}(\mathcal{F}) \cap \text{Supp}(\mathcal{G}) ) \leq r + s - \dim X$. In this case $[\mathcal{F}]_ r$ and $[\mathcal{G}]_ s$ intersect properly and
$[\mathcal{F}]_ r \cdot [\mathcal{G}]_ s = \sum (-1)^ p [\text{Tor}_ p^{\mathcal{O}_ X}(\mathcal{F}, \mathcal{G})]_{r + s - \dim (X)}.$
Proof. The statement that $[\mathcal{F}]_ r$ and $[\mathcal{G}]_ s$ intersect properly is immediate. Since we are proving an equality of cycles we may work locally on $X$. (Observe that the formation of the intersection product of cycles, the formation of $\text{Tor}$-sheaves, and forming the cycle associated to a coherent sheaf, each commute with restriction to open subschemes.) Thus we may and do assume that $X$ is affine.
Denote
$RHS(\mathcal{F}, \mathcal{G}) = [\mathcal{F}]_ r \cdot [\mathcal{G}]_ s \quad \text{and}\quad LHS(\mathcal{F}, \mathcal{G}) = \sum (-1)^ p [\text{Tor}_ p^{\mathcal{O}_ X}(\mathcal{F}, \mathcal{G})]_{r + s - \dim (X)}$
Consider a short exact sequence
$0 \to \mathcal{F}_1 \to \mathcal{F}_2 \to \mathcal{F}_3 \to 0$
of coherent sheaves on $X$ with $\text{Supp}(\mathcal{F}_ i) \subset \text{Supp}(\mathcal{F})$, then both $LHS(\mathcal{F}_ i, \mathcal{G})$ and $RHS(\mathcal{F}_ i, \mathcal{G})$ are defined for $i = 1, 2, 3$ and we have
$RHS(\mathcal{F}_2, \mathcal{G}) = RHS(\mathcal{F}_1, \mathcal{G}) + RHS(\mathcal{F}_3, \mathcal{G})$
and similarly for LHS. Namely, the support condition guarantees that everything is defined, the short exact sequence and additivity of lengths gives
$[\mathcal{F}_2]_ r = [\mathcal{F}_1]_ r + [\mathcal{F}_3]_ r$
(Chow Homology, Lemma 42.10.4) which implies additivity for RHS. The long exact sequence of $\text{Tor}$s
$\ldots \to \text{Tor}_1(\mathcal{F}_3, \mathcal{G}) \to \text{Tor}_0(\mathcal{F}_1, \mathcal{G}) \to \text{Tor}_0(\mathcal{F}_2, \mathcal{G}) \to \text{Tor}_0(\mathcal{F}_3, \mathcal{G}) \to 0$
and additivity of lengths as before implies additivity for LHS.
By Algebra, Lemma 10.62.1 and the fact that $X$ is affine, we can find a filtration of $\mathcal{F}$ whose graded pieces are structure sheaves of closed subvarieties of $\text{Supp}(\mathcal{F})$. The additivity shown in the previous paragraph, implies that it suffices to prove $LHS = RHS$ with $\mathcal{F}$ replaced by $\mathcal{O}_ V$ where $V \subset \text{Supp}(\mathcal{F})$. By symmetry we can do the same for $\mathcal{G}$. This reduces us to proving that
$LHS(\mathcal{O}_ V, \mathcal{O}_ W) = RHS(\mathcal{O}_ V, \mathcal{O}_ W)$
where $W \subset \text{Supp}(\mathcal{G})$ is a closed subvariety. If $\dim (V) = r$ and $\dim (W) = s$, then this equality is the definition of $V \cdot W$. On the other hand, if $\dim (V) < r$ or $\dim (W) < s$, i.e., $[V]_ r = 0$ or $[W]_ s = 0$, then we have to prove that $RHS(\mathcal{O}_ V, \mathcal{O}_ W) = 0$ [1].
Let $Z \subset V \cap W$ be an irreducible component of dimension $r + s - \dim (X)$. This is the maximal dimension of a component and it suffices to show that the coefficient of $Z$ in $RHS$ is zero. Let $\xi \in Z$ be the generic point. Write $A = \mathcal{O}_{X, \xi }$, $B = \mathcal{O}_{X \times X, \Delta (\xi )}$, and $C = \mathcal{O}_{V \times W, \Delta (\xi )}$. By Lemma 43.19.1 we have
$\text{coeff of }Z\text{ in } RHS(\mathcal{O}_ V, \mathcal{O}_ W) = \sum (-1)^ i \text{length}_ B \text{Tor}_ i^ B(A, C)$
Since $\dim (V) < r$ or $\dim (W) < s$ we have $\dim (V \times W) < r + s$ which implies $\dim (C) < \dim (X)$ (small detail omitted). Moreover, the kernel $I$ of $B \to A$ is generated by a regular sequence of length $\dim (X)$ (Lemma 43.13.3). Hence vanishing by Lemma 43.16.2 because the Hilbert function of $C$ with respect to $I$ has degree $\dim (C) < n$ by Algebra, Proposition 10.60.9. $\square$
[1] The reader can see that this is not a triviality by taking $r = s = 1$ and $X$ a nonsingular surface and $V = W$ a closed point $x$ of $X$. In this case there are $3$ nonzero $\text{Tor}$s of lengths $1, 2, 1$ at $x$.
|
2023-01-31 23:22:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.9760782718658447, "perplexity": 107.6739827042651}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499891.42/warc/CC-MAIN-20230131222253-20230201012253-00784.warc.gz"}
|
https://imathworks.com/tex/tex-latex-greek-handwriting-font-with-math-and-optional-tikz-support/
|
# [Tex/LaTex] Greek handwriting font with math and (optional) TikZ support
Tags: fonts, greek, math-mode
I have been searching for several days for a handwriting font in LaTeX which is capable of:
• supporting the Greek language – either babel+inputenc or babel+utf8 or any possible combination;
• supporting math content;
• optionally, supporting Tikz plots etc.
However, I have not managed to find anything – e.g. Google's hand.sty is not compatible with Greek fonts (or I haven't found a way to make it work).
So, are there any suggestions?
P.S.: I have no problem to switch from (pdf)LaTeX to some other engine (e.g. LuaTex etc) in order to achieve this.
I don't know of any handwritten font specifically designed for math or for legacy LaTeX encodings. However, the unicode-math project lets you mix and match any OpenType and TrueType fonts.
Here, I’ve kitbashed the letters from VAG Handwritten and filled in numbers and some math symbols from Tillana. Any symbols not in these fall back to GFS Neohellenic Math.
\documentclass{article}
\usepackage[math-style=upright]{unicode-math}
\defaultfontfeatures{Scale=MatchLowercase}
% Tillana is a free font by the Indian Type Foundry, available at:
% https://github.com/itfoundry/tillana/
% VAG Handwritten is a free font by VAG Design, available at:
% https://www.fontsquirrel.com/fonts/VAG-HandWritten
\setmainfont{VAG-HandWritten.otf}
\setmathfont{GFS Neohellenic Math}
\setmathfont[range={"03C0, "2013-"2014, "2018-"201A, "201C-"201E, "2021-"2022,
"2026, "2030, "2039-"203A, "2044, "20AC, "20BA, "20BD,
"2113, "2122, "2126, "212E, "2202, "2206, "220F, "2211,
"2212, "2215, "221A, "221E, "222B, "2246, "2260, "2264,
"2265, "25CA}
]{Tillana-Regular.ttf}
\setmathfont[range=bfup/{Latin,latin,num}
]{Tillana-Semibold.ttf}
\setmathfont[range={"00-"FF,
up/{Latin,latin,Greek,greek},
\increment}
]{VAG-HandWritten.otf}
\setmathfont[range=up/num
]{Tillana-Regular.ttf}
\begin{document}
$\lim_{t \to \infty} \frac{\partial}{\partial t} \int_0^{2 \muppi} \frac{t^2}{2} \mathop{\symup{d}t} \leq \sum_{i=1}^N \frac{\muppi i}{\sqrt 2} \approx \increment \symbfup{v}t$
$\frac{\sin \theta}{\Theta} = \frac{\sin \varphi}{\Phi} = \frac{\sin \gamma}{\Gamma}$
\end{document}
The widely-available handwritten font with the biggest repertoire of symbols is—you might want to sit down for this, don’t get mad at me, I’m just the messenger—Comic Sans. And some physicists do use it for their presentation slides. You might also have a look at Pecita. But you can sub in any handwritten Greek font of your choice with a command like
\setmathfont[range=up/{Latin,latin,Greek,greek,num}]{Some Font}
|
2022-10-03 22:00:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9026392102241516, "perplexity": 11921.815992186499}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337432.78/warc/CC-MAIN-20221003200326-20221003230326-00029.warc.gz"}
|
https://www.pololu.com/category/151/ws281x-based-addressable-rgb-leds
|
## Products in category “WS281x-Based Addressable RGB LEDs”
Pololu item #: 2535
## Addressable Through-Hole 8mm RGB LED with Diffused Lens, WS2811 Driver (10-Pack)
This 10-pack of 8 mm through-hole RGB LEDs offers an easy way to add colorful and complex lighting effects to a project. Each diffused LED features an integrated WS2811 driver that operates at 5 V and is controlled through a high-speed one-wire digital interface that allows multiple LEDs to be connected together to form a chain of addressable RGB LEDs.
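As an illustration of that one-wire interface, here is a minimal sketch using Adafruit's CircuitPython neopixel library; the pin, pixel count, and library choice are assumptions for the example, not Pololu sample code:

```python
# Minimal sketch (assumed setup): driving a WS281x chain from CircuitPython.
import board
import neopixel

NUM_PIXELS = 10                                  # e.g. one 10-pack chained together
pixels = neopixel.NeoPixel(board.D18, NUM_PIXELS,
                           brightness=0.2, auto_write=False)

for i in range(NUM_PIXELS):
    pixels[i] = (255, 0, 0)   # each pixel is individually addressable (R, G, B)
pixels.show()                  # push the colors down the one-wire data line
```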
Pololu item #: 2546
## Addressable RGB 60-LED Strip, 5V, 2m (WS2812B)
This 2-meter long strip contains 60 RGB LEDs that can be individually addressed using a one-wire interface, allowing you full control over the color of each RGB LED. The flexible, waterproof strip runs on 5 V and can be chained with additional WS2812B strips to form longer runs or cut apart between each LED for shorter sections.
Pololu item #: 2548
## Addressable RGB 60-LED Strip, 5V, 1m (WS2812B)
This 1-meter long strip contains 60 RGB LEDs that can be individually addressed using a one-wire interface, allowing you full control over the color of each RGB LED. The flexible, waterproof strip runs on 5 V and can be chained with additional WS2812B strips to form longer runs or cut apart between each LED for shorter sections.
Pololu item #: 2550
## Addressable High-Density RGB 72-LED Strip, 5V, 0.5m (WS2812B)
This high-density, 0.5-meter long strip contains 72 RGB LEDs that can be individually addressed using a one-wire interface, allowing you full control over the color of each RGB LED. The flexible, waterproof strip runs on 5 V and can be chained with additional WS2812B strips to form longer runs or cut apart between each LED for shorter sections.
Pololu item #: 2772
## Adafruit 16 WS2812 LED NeoPixel Ring
This Adafruit NeoPixel ring contains 16 individually addressable RGB LEDs arranged in a closely spaced circle that can all be controlled with a single digital output of a microcontroller. The ring has a 1.75″ (44.5 mm) outer diameter, and several rings can be chained together, making them great for adding some shine to a small robot or wearable electronics project.
Pololu item #: 2538
## Adafruit 15 WS2812 LED NeoPixel 1/4-Ring
This Adafruit NeoPixel 1/4-ring contains 15 individually addressable RGB LEDs arranged in a quarter-circle that can all be controlled with a single digital output of a microcontroller. Four of these rings can be connected to form a 60-LED NeoPixel ring with an outer diameter of 6.2″ (158 mm). Several rings can be chained together, making them great for adding some shine to your next project.
|
2015-05-24 09:09:02
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19979500770568848, "perplexity": 13571.740307906823}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207927863.72/warc/CC-MAIN-20150521113207-00229-ip-10-180-206-219.ec2.internal.warc.gz"}
|
https://tex.stackexchange.com/questions/linked/32701
|
23 views
### Multicolumn equations [duplicate]
I have two small equations that I want to put into two columns, to save space. (Before and after example images omitted in this copy.)
76k views
### Different column widths using multicol
Is it possible to specify different column widths in a multicol environment? For example, in a two column layout, can the left column be set to twice the width of the right column?
2k views
### Numbering a set of horizontally distributed equations
With several equations distributed horizontally inside an align environment, only the lines are numbered, not the individual equations. How can the left equations be numbered using subequations? ...
2k views
### Equations side-by-side, both numbers on the right
How can I typeset two equations side-by-side but get both equation numbers printed on the right? a = b c = d (1, 2) I need to reference both equations individually in the ...
11k views
### How to increase vertical spacing between minipage and text?
I was trying to include two equations side-by-side in a document, while retaining the labels following this answer. However the spacing between the minipage and the text is too small. How can I ...
2k views
### Equations lining up and labeling
I'm trying to line up multiple equations next to each other and have them all properly labeled. There is another question discussing this exact problem. Unfortunately, all the solutions proposed there ...
326 views
### multicol package causes underfull \hbox
I would like to have two (or three) equations in the same line, side-by-side. I tried to do it with the multicol package: \documentclass[a4paper]{article} \usepackage{multicol} \begin{document} \...
588 views
### Formatting Maxwell's Equations
Hey y'all I am trying to format Maxwell's equations like the image below but without the box. I also want to be able to reference the labels, thanks!
212 views
### Two separate align in one direction in latex
I would like to write two different aligns with two different labels, but I want both of them to begin in one direction. How can I fix this code in latex? \documentclass[12pt,titlepage,a4paper]{book}...
169 views
### Labelling subequations in the same line of the equation automatically
I would like to automatically be able to label and reference parts of equations. To explain what I mean I'll provide an example. \documentclass{article} \usepackage{amsmath,hyperref,cleveref} \begin{...
56 views
### LaTeX aligned numbered equations
I am trying to align two vector definitions next to each other. Both equations are (and should be) numbered. I found a similar (answered) question on Stack but my equations don't seem to align ...
72 views
### How to left align equation numbering within multicol?
I have the following code: \documentclass[11pt, spanish]{article} %\usepackage{babel} \usepackage[latin1]{inputenc} \usepackage{geometry} \geometry{legalpaper, a4paper, %top=25mm, %bottom=25mm, ...
82 views
### Multiple tagged equations on a singly tagged line
I'm trying to build a way to have an arbitrary number of equations (typically two, but sometimes three, and if they're particularly short definitions even four) on a single line, but tagged only once ...
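One approach that recurs across these side-by-side questions (a minimal sketch for illustration only, not copied from any single linked answer; it assumes the amsmath package) is to put each equation environment in its own minipage, so that each equation keeps its own right-hand number and its own label:

\documentclass{article}
\usepackage{amsmath}
\begin{document}
\noindent
\begin{minipage}{0.48\textwidth}
  \begin{equation}
    a = b \label{eq:left}
  \end{equation}
\end{minipage}\hfill
\begin{minipage}{0.48\textwidth}
  \begin{equation}
    c = d \label{eq:right}
  \end{equation}
\end{minipage}

Equations \eqref{eq:left} and \eqref{eq:right} can then be referenced
individually in the text.
\end{document}

The minipages keep the two equations on one line while letting the ordinary equation counter number each of them separately.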
|
2022-01-19 21:38:50
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9491112232208252, "perplexity": 2484.3830578700486}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301488.71/warc/CC-MAIN-20220119185232-20220119215232-00125.warc.gz"}
|
http://michaelnielsen.org/polymath1/index.php?title=Iterated_sumsets_of_log-primes
|
Iterated sumsets of log-primes
Let K be the set of logarithms of all primes in some interval [u,u+v] (it seems reasonable to take v comparable to u). If one can show that some iterated sumset of K intersects the logarithm of an interval [n,n+a], then (with a factoring algorithm) one would be able to find a prime in [u,u+v] in an amount a of work. This becomes non-trivial as soon as a is significantly smaller than u.
We want u to be somewhat smaller than n, say u ~ n^{1/3} or u ~ n^{1/log log n}.
Note that the Fourier coefficients of K are related to the Riemann Zeta function, and so one expects good bounds on the Riemann hypothesis. One can hope that other tools from additive combinatorics are useful, possibly exploiting the concavity of the logarithm function.
A weaker version of this strategy replaces K by L, the set of logarithms of all _integers_ in the interval [u,u+v]. This gives a weaker result (finding an integer with a factor in [u,u+v]) but the Fourier-analytic behaviour of L is presumably better (one only needs the Lindelöf hypothesis rather than Riemann).
K. Ford has a big paper in Annals (available at http://front.math.ucdavis.edu/0401.5223 ) which gives information about the asymptotic behavior of the number of integers up to x with at least one divisor in an interval [y,z], more or less for arbitrary x, y, z. He does state a result (Th. 2, page 6 of the arXiv PDF) with n restricted to an interval [x-\Delta,x], but \Delta must be quite large (something like n/\sqrt{\log n} in the setting of the previous comment); he does say that the range of \Delta can be improved, but to get a power of log seems hard. However, the techniques he develops could be useful.
An objection
There are only about a integers in [n,n+a]. Each integer of size n has at most log n prime factors (log n/log log n, in fact), so there are only about a log n primes that contribute to [n,n+a]. If one deletes these primes from K then the sumset of K will now completely miss the log of [n,n+a]. Thus we will not be able to proceed just by using average case information about K (cf. the generic prime discussion, or the oracle counterexample to finding pseudoprimes).
Analysis of K+K+K
Here we try a model in which u ~ S and n ~ S^3 for some S.
Let K be the logarithms of the primes between S and 2S. Thus this set consists of about S/\log S numbers in the interval [\log S, \log S+\log 2]. It's quite uniformly distributed in a Fourier sense (especially if one assumes the Riemann hypothesis).
Experience has shown that double sumsets K+K tend to be well behaved almost everywhere, but triple sumsets K+K+K and higher are well behaved everywhere. (Thus, for instance, the odd Goldbach conjecture is solved for all large odd numbers, but the even Goldbach conjecture is only known for almost all large even numbers, even assuming GRH.) So it seems reasonable to look at the triple sumset K+K+K, which is lying in the interval [3 \log S, 3 \log S + 3 \log 2].
Suppose we are looking to find a non-S-smooth number in time O( S^{0.99} ) (say). It would suffice to show that the interval [T, T + S^{-2.01}] contains an element of K+K+K for some fixed T in [3 \log S, 3 \log S + 3 \log 2], e.g. T = 3 \log S + \frac{1}{2} 3 \log 2.
On the one hand, this is quite a narrow interval to hit. On the other hand, K+K+K has about S^3/\log^3 S triples in it, so probabilistically one has quite a good chance of catching something. But, as always, the difficulty is to get a deterministic result which works even in the worst case scenario.
Hmm, the tininess of the interval [T,T+S^{-2.01}] is quite discouraging. Even if one considers the larger set K + L, where L is the log-integers (and which are very highly uniformly distributed), one can still miss this interval entirely. Undoing the logarithm, the point here is that an interval of the form [N, N+S^{0.99}] could manage, by a perverse conspiracy, to miss all multiples of p for every prime p between S and 2S.
Here is a back-of-the-envelope Fourier calculation which looks a bit discouraging. Suppose one wants to show that the interval [2N^3, 2N^3+N^{0.99}] contains an element of the triple product set [N,2N] \cdot [N,2N] \cdot [N,2N]. If we let \mu be counting measure on the log-integers \{ \log n: N \leq n \leq 2N \}, we are asking that \mu*\mu*\mu gives a non-zero weight to the interval [\log 2N^3, \log 2N^3 + O( N^{-2.01} ) ].
We express this Fourier-analytically, basically as a Fourier integral of \hat \mu(\xi)^3 over an interval \xi = O(N^{2.01}), multiplied by the normalising factor of N^{-2.01}.
The main term will be coming from the low frequencies \xi=O(1), where \hat \mu(\xi) is about N; this gives the main term of about N^{0.99}, which is what one expects.
What about the error terms? Well, the Dirac spikes of \mu are distance about 1/N apart. For $N \ll |\xi| \ll N^{2.01}$, there’s no particular reason for any coherence in the Fourier sum in \hat \mu(\xi), and so I would expect the sum to behave randomly, i.e. \hat \mu(\xi) = O(\sqrt{N}) in this region. (In fact, RH basically would give this to us). This leads one to an error term of O( \sqrt{N}^3 \times N^{2.01} \times N^{-2.01} ) = O(N^{1.5} ), which is too large compared to the main term.
The situation does not seem to improve with various tweaking of parameters, though maybe I’m missing something.
Analysis of L+L
I can get a non-trivial result on L+L using the Weyl bound for the Gauss circle problem, or more precisely the variant of this circle problem for the hyperbola (essentially the Dirichlet divisor problem).
More precisely, let’s look at the product set [S,2S] \cdot [S,2S] \subset [S^2, 4S^2] in the middle of the interval [S^2,4S^2], say near 2S^2 (this is like considering L+L where L are the log-integers restricted to [\log S, \log S + \log 2]). It’s trivial that any interval of length S near 2S^2 will meet [S,2S] \cdot [S,2S]. I claim though that the same is true for intervals of size about S^{2/3}. The reason is that the number of products of the form [S,2S] \cdot [S,2S] less than a given number x is basically the number of lattice points in the square [S,2S] \times [S,2S] intersect the hyperbolic region \{ (a,b): ab < x \}. The Weyl technology for the Gauss circle problem (Poisson summation, etc.) gives an asymptotic for this with an error of O(S^{2/3}), which implies in particular that this count must increase whenever x increases by more than O(S^{2/3}). So every interval of this length must contain at least one number which factors as a product of two numbers in [S,2S].
Presumably some of the various small improvements to the Weyl bound for the circle problem over the years can be transferred to the hyperbola, allowing one to reduce the 2/3 exponent slightly.
Unfortunately the asymptotics become much much worse if we restrict the numbers in [S,2S] to be prime, so I doubt this gives anything particularly non-trivial for the original primality problem. Also the error term in these lattice point problems is never going to be better than S^{1/2}, so we once again butt our heads against this S^{1/2} barrier.
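As an aside (not part of the original wiki page): the claim that every interval of length about S^{2/3} near 2S^2 meets the product set [S,2S] \cdot [S,2S] is easy to probe numerically for small S. A minimal brute-force sketch in Python:

# Largest gap between consecutive elements of the product set
# [S,2S]*[S,2S] in a window around 2*S^2, compared with S^(2/3).
S = 300
products = sorted({a * b for a in range(S, 2 * S + 1)
                   for b in range(a, 2 * S + 1)})
center = 2 * S * S
window = [p for p in products if abs(p - center) <= 20 * S]
max_gap = max(b - a for a, b in zip(window, window[1:]))
print("S =", S, " largest gap near 2S^2 =", max_gap,
      " S^(2/3) ~", round(S ** (2 / 3)))

For small S the observed gaps near 2S^2 sit well below S^{2/3}, consistent with (though of course not proving) the lattice-point bound above.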
|
2019-01-18 16:45:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9827119708061218, "perplexity": 890.4284168406967}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583660175.18/warc/CC-MAIN-20190118151716-20190118173716-00540.warc.gz"}
|
http://dml.cz/dmlcz/119687
|
Keywords:
nonlinear degenerate elliptic equations; existence; measure data
Summary:
In this paper we prove existence results for some nonlinear degenerate elliptic equations with data in the space of bounded Radon measures and we improve the results already obtained in Cirmi G.R., {\it On the existence of solutions to non-linear degenerate elliptic equations with measure data\/}, Ricerche Mat. {\bf 42} (1993), no. 2, 315--329.
References:
[1] Boccardo L., Gallouët T.: Nonlinear elliptic and parabolic equations involving measure data. J. Funct. Anal. 87 (1989), 149-169. MR 1025884
[2] Boccardo L., Gallouët T.: Nonlinear elliptic equations with right hand side measures. Comm. Partial Differential Equations 17 3-4 (1992), 641-655. MR 1163440
[3] Rakotoson J.M.: Quasilinear elliptic problems with measure as data. Differential Integral Equations 4 (1991), 449-457. MR 1097910
[4] Rakotoson J.M.: Generalized solutions in a new type of sets for problems with measures as data. Differential Integral Equations 6 1 (1993), 27-36. MR 1190163 | Zbl 0780.35047
[5] Bénilan Ph., Boccardo L., Gallouët T., Gariepy R., Pierre M., Vazquez J.L.: An $L^1$-theory of existence and uniqueness of solutions of non-linear elliptic equations. Ann. Scuola Norm. Sup. Pisa Cl. Sci. (4) 22 2 (1995), 241-273. MR 1354907
[6] Dal Maso G., Murat F., Orsina L., Prignet A.: Renormalized solutions of elliptic equations with general measure data. Ann. Scuola Norm. Sup. Pisa. Cl. Sci. (4) 28 4 (1999), 741-808. MR 1760541
[7] Cirmi G.R.: On the existence of solutions to non-linear degenerate elliptic equations with measure data. Ricerche Mat. 42 2 (1993), 315-329. MR 1283363
[8] Rakotoson J.M.: Resolution of the critical cases for problems with $L^1$ data. Asymptotic Analysis 6 (1993), 285-293. MR 1201197
[9] Heinonen J., Kilpeläinen T., Martio O.: Nonlinear Potential Theory of Degenerate Elliptic Equations. Oxford Mathematical Monographs, Clarendon Press, Oxford, 1993. MR 1207810
[10] Murthy M.K.V., Stampacchia G.: Boundary value problems for some degenerate elliptic operators. Ann. Mat. Pura Appl. 90 (1971), 1-122. MR 0249828 | Zbl 0226.35037
[11] Leray J., Lions J.L.: Quelques résultats de Višik sur les problèmes elliptiques nonlinéaires par les méthodes de Minty et Browder. Bull. Soc. Math. France 93 (1965), 97-107. MR 0194733
[12] Lions J.L.: Quelques méthodes de résolution des problèmes aux limites nonlinéaires. Dunod, Paris, 1969.
[13] Porretta A.: Some remarks on the regularity of solutions for a class of elliptic equations with measure data. Houston. J. Math. 26 1 (2000), 183-213. MR 1814734 | Zbl 0974.35032
[14] Li F.Q.: Nonlinear degenerate parabolic equations with measure data. Nonlinear Anal. 61 (2005), 1269-1282. MR 2131654 | Zbl 1073.35133
|
2016-10-20 19:38:26
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8951463103294373, "perplexity": 2818.8484511756137}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988717783.68/warc/CC-MAIN-20161020183837-00503-ip-10-171-6-4.ec2.internal.warc.gz"}
|
https://brilliant.org/problems/intersecting-intervals/
|
# Intersecting Intervals
Choose 2 numbers uniformly from $$[0,1]$$ and let them be the endpoints of the first interval.
Choose another 2 numbers uniformly from $$[0,1]$$ and let them be the endpoints of the second interval.
What is the probability that these two intervals intersect?
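Before solving exactly, one can estimate the answer numerically. A minimal Monte Carlo sketch in Python (my own illustration, not part of the problem page):

import random

trials = 1_000_000
hits = 0
for _ in range(trials):
    a1, a2 = sorted(random.random() for _ in range(2))
    b1, b2 = sorted(random.random() for _ in range(2))
    # The intervals [a1,a2] and [b1,b2] are disjoint exactly when
    # one of them ends before the other begins.
    if not (a2 < b1 or b2 < a1):
        hits += 1
print(hits / trials)  # roughly 0.667

The estimate hovers around 2/3, which suggests the exact answer to aim for.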
|
2017-05-22 15:48:42
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.962286651134491, "perplexity": 347.32850086662444}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463605188.47/warc/CC-MAIN-20170522151715-20170522171715-00415.warc.gz"}
|
https://www.gamedev.net/forums/topic/333400-namespaces-in-classes/
|
# Namespaces in classes
## Recommended Posts
Jiia 592
This doesn't seem to be possible. But wouldn't it help a lot? I wish I could derive a world-like object from an engine-like class to receive some functionality from it, but it would be nice to keep its engine-level routines separate from the object's worldly routines. Here's an example:
class ParticlePhysics
{
public:
namespace Particle
{
void UpdatePhysics();
void DieOut();
void StartFloating();
};
private:
.....
};
class MadMan : public ParticlePhysics
{
public:
void Update()
{
Particle::UpdatePhysics();
}
private:
...
};
Compare to something like this:
class ParticlePhysics
{
public:
void Particle_UpdatePhysics();
void Particle_DieOut();
void Particle_StartFloating();
private:
.....
};
class MadMan : public ParticlePhysics
{
public:
void Update()
{
Particle_UpdatePhysics();
}
private:
...
};
This is a really bad example. I have no idea why MadMan would want to be derived from a particle. But I find myself having this problem sometimes. Some classes are made to be derived from for this purpose. And I wish there was some way to prevent confusion. Is there another way? Thanks for any suggestions
JohnBolton 1372
It's a bad idea. Basic rule of thumb: a class should do only one thing.
Besides, in your example, the namespace doesn't do anything except add redundant scope. Just do this instead:
class MadMan : public ParticlePhysics
{
public:
    void Update()
    {
        ParticlePhysics::UpdatePhysics();
    }
};
Fruny 1658
If I remember properly, Objective-C lets you do things like that. C++ does not. Oh, well.
Jiia 592
Quote:
Original post by JohnBolton
It's a bad idea. Basic rule of thumb: a class should do only one thing.
It's not an idea, I just want to make things easier to organize and follow. I'm not sure I understand the relevance of your basic rule of thumb statement.
Quote:
Besides, in your example, the namespace doesn't do anything except add redundant scope.
Isn't that what all namespaces do? That's what I wanted it to do. Same reasons.
Quote:
That's pretty obvious, but it's more than just about calling the function name.
MaulingMonkey 1728
Quote:
Original post by Jiia
Quote:
Original post by JohnBolton
It's a bad idea. Basic rule of thumb: a class should do only one thing.
It's not an idea, I just want to make things easier to organize and follow. I'm not sure I understand the relevance of your basic rule of thumb statement.
He's criticizing the fact that your design makes you feel the need for such prefixes.
Quote:
Quote:
Besides, in your example, the namespace doesn't do anything except add redundant scope.
Isn't that what all namespaces do?
No, namespaces add scope/organization so things don't clash. This repeatedly, reiteratingly (is that a word?) and redundantly adds scope/organization:
You can use the ClassName:: prefix to access that scope to prevent clashes
just like:
You can use the NamespaceName:: prefix to access that scope to prevent clashes
outside of a class.
There's no need to wrap all the members of ParticlePhysics in a namespace scope, because they're already in a class scope.
Quote:
This is a really bad example... But I find myself having this problem sometimes.
Jiia 592
Quote:
Original post by MaulingMonkey
He's criticizing the fact that your design makes you feel the need for such prefixes.
Criticizing was not the help I was looking for, but I still don't see the relevance, or why we're thinking that bad design is lurking.
Quote:
There's no need to wrap all the members of ParticlePhysics in a namespace scope, because they're already in a class scope. ... Why create a bad example when you already have examples (your previous experiences)?
The method is exactly the same; I'm deriving from a class. The inherited class is very complex, and the base class is very simple. I use the IDE's tools to access and write code, and that means a lot of functions exist for this class that are all thrown together. Compiler auto-completion alone is enough reason to use prefixes. There are about five functions that deal with collision. It would be nice to provide a Collision:: prefix for them in the class. There are at least 15 that work with animations, 6 that update commands, 8 that update command tasks, 13 that handle equipment and armor, five or so that just divide normal updating code, around 12 for AI routines, and 3 for input.
When I type Character->, I get a table of contents. It's great, I would just like to organize it.
JohnBolton 1372
Quote:
Original post by Jiia I'm not sure I understand the relevance of your basic rule of thumb statement.
I assume by wanting namespaces inside a class, you want to do something like this:
class A
{
    namespace X { ... }
    namespace Y { ... }
    namespace Z { ... }
};
This implies to me that the class has three separate behaviors or modes of functionality. Better would be to divide A into three separate classes, or make X, Y, and Z classes rather than namespaces.
Jiia 592
Quote:
Original post by JohnBolton
This implies to me that the class has three separate behaviors or modes of functionality. Better would be to divide A into three separate classes, or make X, Y, and Z classes rather than namespaces.
All three behaviors are exclusive to A. Wouldn't adding a new class for every routine that requires more than one function make a mess of things? The routines deal directly with the object. They are part of the object. I would have to pass this pointers to each class function just to have it function properly. Then there's the matter of private data, which the routines obviously need. I would end up making my whole project a huge library of one-use classes. That doesn't sound like the answer I'm looking for.
illone 151
without a specific example of when this usage pattern would be required/preferred, it may be hard for others to see why one would want to use it. if you have a pre-existing example, sharing it might help others see what you mean.
in your example, it seems like your ParticlePhysics class is managing particle physics and particle persistence as well as some display functionality (though i may be misinterpreting what fade out is doing). that would be 2, maybe 3, separate processes stuffed into one class. and then the class being derived by 'Madman' helps little in explaining how this usage pattern could be helpful.
illone
Jiia 592
I have a large, complex class that is very flexible and does many different things. I want to organize all of the routines it can perform.
I'll find my own way. I appreciate the help.
JohnBolton 1372
Instead of namespaces inside of a class, what about classes inside of a namespace?
namespace ParticlePhysics
{
    class Particle
    {
    public:
        void UpdatePhysics();
        void UpdateFade();
        void DieOut();
        void StartFloating();
    };
}
Nemesis2k2 1045
Quote:
I have a large, complex class that is very flexible and does many different things.
There's a word for that in Object Oriented Programming: It's called Low Cohesion, and it's a sign of a bad design. I know you didn't create this thread to get critical feedback on your design, but you should really look into this, and why it can be a bad thing.
The primary intent of namespaces is to provide a mechanism for partitioning class, struct, function, and other definitions to eliminate naming conflicts. Examples would be classes contained in the STL. Without namespaces you may end up with conflicts with classes such as list, string, and exception. The syntax, and more importantly how the compiler handles namespaces, isn't very applicable inside of a class declaration. If parts of the language were changed to support this type of declaration, the amount of resulting ambiguity in the use of namespaces would in all likelihood degrade its usefulness.
From what I can tell the only real reason you are wanting to do this is because of the auto completion feature in the IDE you are using (no, it's NOT part of the 'compiler'). This is really the wrong reason for even considering doing such a thing. The suggestions and/or criticism concerning how your classes are designed should be considered as constructive and good advice - especially since they are coming from a large group of VERY experienced individuals. Considering that you are working with a very large class that includes a wide set of complex operations, you really should consider breaking it down into multiple classes containing sets of specific operations.
Jiia 592
Perhaps the reason it's considered bad design is lack of organizing code tools? I have no problem with critical feedback, but the only advice I've seen so far was to use classes to separate tasks that my object can perform. That does provide the grouping of that single routine type, but at a messy cost. In the end, I have twenty really tiny classes that are exclusive in their use to characters and share tons of data with each other. I fail to see how this is an improvement.
Sure, I can build a class that controls the character's eyes to look around, and a class to enable or disable control systems, and a class to handle path node updates. Also a class for moving equipment into and out of hands, inventory, and the environment. Another class to handle equipping suits, checking suit compatibility, updating suit dependencies, and updating their effect on the character (note that I already have a suit class, but now I'll have a CharacterSuitManagement class). Then my characters have something called "action events". These are ways that characters can interact with the world. For example, they could walk into an area trigger and an action event is presented to them for them to make a choice. There are three functions dealing with and managing action events. I already have an ActionEvent type class, but now I'll have a CharacterActionEventManagement class. You might say that a three-function routine isn't enough of a reason to want change, but you soon change your mind when there are 20 routines that all have three functions. There are several routines for standing, about five for moving, three or four for climbing on ledges, only two for aiming ranged weapons, and a few for updating directions - movement, facing, and looking are all separate states that are updated individually.
Almost every task in this class can be grouped by its purpose, but it doesn't fit into another class. It doesn't make sense as one. If I'm mistaken, please enlighten me.
Jiia 592
Quote:
Original post by Helter Skelter
From what I can tell the only real reason you are wanting to do this is because of the auto completion feature in the IDE you are using (no, it's NOT part of the 'compiler').
Forgive my IDE/Compiler wording mixup, that was very inexcusable of me.
Quote:
Original post by Jiia
Perhaps the reason it's considered bad design is lack of organizing code tools?
That's like blaming a word processor for bad grammar. Design has nothing to do with languages, tools, or even specific compilers. Even low level designs have an abstraction above their actual implementation.
Quote:
I have no problem with critical feedback, but the only advice I've seen so far was to use classes to separate tasks that my object can perform.
You've only provided a minimal amount of information. In deciding which functionality should be moved to a different class you really have to decide where that functionality really belongs. Take a file parser for instance. I may create a class that reads from a file, turns each symbol into a token, and then acts on specific sequences of tokens. If I ever want to change that class to parse text that is already loaded in memory I either replace the file reader OR include additional code to handle in-memory specific parsing.
Alternatively I can take each group of functionality and place them in separate classes to make it more extensible. In doing this I may create a stream class (called FileStream) to handle input, a single class that reads from the stream and converts the input into tokens (called Tokenizer), and a class which acts on the token stream to do specific operations (called Parser). Now if I want to add support to parse information already in memory I create a new stream type class (called MemoryStream) which gets passed to the tokenizer. In short I can add, replace, or change existing components without breaking the others or having to modify them to support the changes.
You just need to look at your overall design, separate functionality into logical blocks, then decide whether inheritance or composition is more applicable to how everything works together.
Quote:
Almost every task in this class can be grouped by its purpose, but it doesn't fit into another class. It doesn't make sense as one. If I'm mistaken, please enlighten me.
Maybe it does, maybe it doesn't. That's the entire reason posters have suggested you revisit your design and decide if changes are necessary. Design is not simple nor is every design decision obvious. I'm sure that if you posted specific examples (i.e. your class definitions) with sufficient information about dependencies, use, and relationships, others on the forum will be more than happy to make suggestions and possibly even provide a blow-by-blow account of why they make certain decisions.
Jiia 592
Quote:
Original post by Helter Skelter
Quote:
Original post by Jiia
Perhaps the reason it's considered bad design is lack of organizing code tools?
That's like blaming a word processor for bad grammar.
It's more like blaming the height restriction of my skyscraper on the building materials. (edit: By code-tools, I was referring to tools in code, not tools to work with code. IE, language features)
Quote:
You've only provided a minimal amount of information.
How much information do you need? There's a lot of functions in a class, and that gets confusing. How difficult is that to follow? If there were a way to group functions by purpose, it would be much more manageable. Or at least I would think so.
Quote:
In deciding which functionality should be moved to a different class you really have to decide where that functionality really belongs.
It belongs where it's at. That's why I put it there. I have no grudge against writing new classes, but it makes no sense in most of these situations.
Quote:
Alternatively I can take each group of functionality and place them in separate classes to make it more extensible. In doing this I may create a stream class (called FileStream) to handle input, a single class that reads from the stream and converts the input into tokens (called Tokenizer), and a class which acts on token stream do to specific operations (called Parser).
Let's look at it from another angle. Let's say your parser includes several string management functions. Just three or four to compare, convert, search, or whatever. Then say it has a few error checking routines to see if things are going bad with the stream. Multiply this type of growth by twenty, and you have what I have. And it's still growing rapidly. You're telling me to create a class to search, manage, and convert parser strings from its stream, even though the class will contain no data and only has three functions. The stream would also need to be sent to the class for each function.
Quote:
Design is not simple nor is every design decision obvious. I'm sure that if you posted specific examples (i.e. your class definitions) with sufficient information about dependencies, use, and relationships, others on the forum will be more than happy to make suggestions and possibly even provide a blow-by-blow account of why they make certain decisions.
I don't have time to edit the source enough to be decent for posting. My IDE font is very small, so my comments run really far to the right. That means the source box will explode and you'll have a 5000 pixel wide thread page. It really is a huge class, hence the reason I'm looking to improve its management. The class declaration itself is about 200 lines of code.
Like I said, I appreciate the help. I'll find my own way around it.
Nitage 1107
Quote:
How much information do you need? There's a lot of functions in a class, and that gets confusing. How difficult is that to follow? If there were a way to group functions by purpose, it would be much more manageable. Or at least I would think so.
The people in this thread have been trying to tell you that your design is at fault here, not the language.
It turns out that there is a construct for grouping functions by purpose that is very manageable in this scenario - they're called classes.
Quote:
I have a large, complex class that is very flexible and does many different things.
You created a large class with multiple responsibilities. Why don't you move the functionality of this class into several other classes and use your ParticlePhysics class as a facade (that's basically an interface class)?
ajones 432
Sorry if I'm being dumb (I don't use inner structs very often), but can't you use inner structs instead of namespaces to do what you want? I know there are some (compiler related) problems when determining scope rules for inner structs (e.g. in the following example some compilers won't let ParticlePhysics::Particle::UpdatePhysics() access or modify a ParticlePhysics::m_doo). Try it out and see what you get:
class ParticlePhysics
{
public:
    // ...
protected:
    struct Particle
    {
        // doesn't modify state
        static void UpdatePhysics();
        // ...
    };
    struct Whatever
    {
        // modifies state
        void Bar();
        // ...
    private:
        int m_choo;
    } whatever;  // note: Whatever has state - requires instance
private:
    int m_doo;
};

class MadBugger : public ParticlePhysics
{
public:
    void Update()
    {
        // static method
        Particle::UpdatePhysics();
        // ...
        // instance method
        whatever.Bar();
    }
};
Sometimes better code readability and usability require 'unpopular' methods to get the job done; remember that everyone here is only trying to help (in a very general way), albeit without the wider understanding of the scope, overall design, or implications of what you have presented here. Good luck! [smile]
Edit: improved example to show inner struct semantics with and without state, and need to have instance of ParticlePhysics::Whatever; also made inner structs protected
[Edited by - ajones on July 21, 2005 3:40:42 AM]
Quote:
Original post by Jiia
By code-tools, I was referring to tools in code, not tools to work with code. IE, language features.
Regardless, programming languages have nothing to do with bad design.
Quote:
If there were a way to group functions by purpose, it would be much more manageable. Or at least I would think so.
Ummmm...that's what classes are for.
Quote:
Let's look at it from another angle. Let's say your parser includes several string management functions. Just three or four to compare, convert, search, or whatever.
That depends. If the string functions can be considered general purpose they will most likely end up in a utility class, string class, or in their own little section of the program. Support for strings is broad enough that they in all likelihood belong elsewhere.
Quote:
Then say it has a few error checking routines to see if things are going bad with the stream.
That functionality belongs in the appropriate stream class. All the parser cares about is if data is available or the end of input has been encountered.
Quote:
Like I said, I appreciate the help. I'll find my own way around it.
Good luck.
MaulingMonkey 1728
Quote:
Original post by ajones
Sorry if I'm being dumb (I don't use inner structs very often), but can't you use inner structs instead of namespaces to do what you want?
/*
 * I believe he wants to do something like this:
 */
class foo {
    int member_variable;
    namespace bar {
        void function() {
            member_variable = 3;
        }
    }
};

/*
 * You cannot directly emulate this in C++:
 */
class foo {
    int member_variable;
    class bar {
        void function() {
            /*
             * error: foo::bar is instantiable completely independently from a
             * foo object - which foo's member_variable do we use?
             */
            member_variable = 3;
        }
    };
};

/*
 * You can work around this by maintaining a reference manually... something
 * similar to this is an option:
 */
class foo {
    int member_variable;
    class bar_impl {
        foo & parent;
        bar_impl( foo & parent ) : parent( parent ) {}
        void function() {
            parent.member_variable = 3;
        }
    } bar;
    foo() : bar( *this ) {}
};

/* There is some added overhead (one reference per type/subgroup) using this
 * method. Considering the only good argument for class sub-namespaces seems to
 * be related to IntelliSense, I might recommend using non-member functions in
 * conjunction:
 */
class pie {
public:
    void foo_eat( void ) {}
    void foo_make( int arg ) {}
    void foo_throw_it( void ) {}
    void bar_eat( void ) {}
    void bar_make( int arg ) {}
    void bar_throw_it( void ) {}
};

namespace foo {
    void eat( pie & self ) { self.foo_eat(); }
    void make( pie & self, int arg ) { self.foo_make( arg ); }
    void throw_it( pie & self ) { self.foo_throw_it(); }
}

namespace bar {
    void eat( pie & self ) { self.bar_eat(); }
    void make( pie & self, int arg ) { self.bar_make( arg ); }
    void throw_it( pie & self ) { self.bar_throw_it(); }
}

pie my_pie;

// equivalent lines:
my_pie.foo_eat();
foo::eat( my_pie );

// equivalent lines:
my_pie.bar_make( 3 );
bar::make( my_pie, 3 );
...
ajones 432
Quote:
Original post by MaulingMonkey
I believe he wants to do something like this: /* assignment to outer class member variable from inside inner class */ ... You cannot directly emulate this in C++
If so, there's a directly related discussion in the C++ Users Journal (you can also find the article here) that provides examples of how to work around the problem (although they're ugly); it also references a standard revision that should solve this language defect (access rules for nested classes).
Note that this is only relevant if Jiia does intend to mutate an outer member variable from an inner class; as long as Jiia only requires access from outer to inner scope (i.e. MadMan uses Particle:: members, but Particle:: doesn't make use of ParticlePhysics:: members), then an inner class is fine.
Quote:
Original post by MaulingMonkey
Quote:
Original post by ajones
Sorry if I'm being dumb (I don't use inner structs very often), but can't you use inner structs instead of namespaces to do what you want?
*** Source Snippet Removed ***
Actually, when this thread was started I was checking by doing something like that. About the closest I came was creating an inner class that contained a reference to the outer class and overloading the -> operator in the inner class to return a pointer to the outer class.
class Outer
{
    void DoSomething();
    class Inner
    {
    public:
        Outer &outer;
        Outer *operator->() { return &outer; }
    };
public:
    Inner Particle;
};

Outer outer;
outer.Particle->DoSomething();
In the end it's ugly and pointless.
Nemesis2k2 1045
Am I the only one that thinks if an inner class needs knowledge of the containing class, it shouldn't be an inner class at all?
Guest Anonymous Poster
Quote:
Original post by Helter Skelter
In the end it's ugly and pointless.
From my current (PhD) project's resource system (apologies for the huge source snippet - I just want to present a complete example):
class CfgElement
{
public:
    explicit CfgElement(ASSET_SYSTEM::AssetDataIterator& in,
                        RESOURCE_SYSTEM::qualified_elements& qualifiedElements,
                        RESOURCE_SYSTEM::qualified_attributes& qualifiedAttributes,
                        const std::string& parentQualifiedName);

    inline const std::string getName() const { return m_name; }
    inline const std::string getQualifiedName() const { return m_qualifiedName; }

public:
    struct elements_collection
    {
    public:
        typedef CFG_ELEMENTS::named_elements named_elements;
        typedef CFG_ELEMENTS::named_iterator named_iterator;
        typedef CFG_ELEMENTS::named_range named_range;
        typedef CFG_ELEMENTS::ordered_elements ordered_elements;
        typedef CFG_ELEMENTS::ordered_iterator ordered_iterator;
        typedef CFG_ELEMENTS::ordered_range ordered_range;
    public:
        const named_range get(const std::string& name) const;
        const named_iterator begin_named() const;
        const named_iterator end_named() const;
        const ordered_iterator get(const size_t& index) const;
        const ordered_iterator begin_ordered() const;
        const ordered_iterator end_ordered() const;
    private:
        friend class CfgElement;
        elements_collection();
        named_elements m_named;
        ordered_elements m_ordered;
    } elements;

    struct attributes_collection
    {
    public:
        typedef CFG_ATTRIBUTES::named_attributes named_attributes;
        typedef CFG_ATTRIBUTES::named_iterator named_iterator;
        typedef CFG_ATTRIBUTES::named_range named_range;
        typedef CFG_ATTRIBUTES::ordered_attributes ordered_attributes;
        typedef CFG_ATTRIBUTES::ordered_iterator ordered_iterator;
        typedef CFG_ATTRIBUTES::ordered_range ordered_range;
    public:
        const named_iterator get(const std::string& name) const;
        const named_iterator begin_named() const;
        const named_iterator end_named() const;
        const ordered_iterator get(const size_t& index) const;
        const ordered_iterator begin_ordered() const;
        const ordered_iterator end_ordered() const;
    private:
        friend class CfgElement;
        attributes_collection();
        named_attributes m_named;
        ordered_attributes m_ordered;
    } attributes;

private:
    std::string m_name;
    std::string m_qualifiedName;
};
I still maintain that inner structs are a valid solution to the OP's problem - the above code compiles fine under VC++ .NET 2003, where the body of CfgElement's constructor is actually responsible for inserting both elements and attributes into the respective collections' containers. Neither elements_collection nor attributes_collection makes any use of the outer (CfgElement) class. CfgElement's interface, given its context and suitable documentation, is clean and easily understood, despite the use of two nested structs to group semantics.
I'm sorry if I'm being a bit anal about this [razz] - I just see it as a potential solution to the OP's problem; the use of inner structs can be ugly if you need inner -> outer access (I'm certainly not arguing that point), but the OP has yet to confirm this.
|
2017-08-19 11:23:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21507121622562408, "perplexity": 2603.5485091134556}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886105341.69/warc/CC-MAIN-20170819105009-20170819125009-00207.warc.gz"}
|
https://math.stackexchange.com/questions/3070332/can-erd%C5%91s-tur%C3%A1n-frac58-theorem-be-generalised-that-way
|
# Can Erdős-Turán $\frac{5}{8}$ theorem be generalised that way?
Suppose that for an arbitrary group word $$w$$ over the alphabet of $$n$$ symbols, $$\mathfrak{U_w}$$ is the variety of all groups $$G$$ that satisfy the identity $$\forall a_1, \dots, a_n \in G \quad w(a_1, \dots, a_n) = e$$. Is it true that for any group word $$w$$ there exists a positive real number $$\epsilon(w) > 0$$ such that a finite group $$G$$ is in $$\mathfrak{U_w}$$ iff $$\frac{\lvert\{(a_1, \dots, a_n) \in G^n : w(a_1, \dots, a_n) = e\}\rvert}{{|G|}^n} > 1 - \epsilon(w)?$$
How did this question arise? There is a widely known theorem proved by P. Erdős and P. Turán that states:
A finite group $$G$$ is abelian iff $$\frac{|\{(a, b) \in G^2 : [a, b] = e\}|}{{|G|}^2} > \frac{5}{8}.$$
This theorem can be rephrased using aforementioned terminology as $$\epsilon([a, b]) = \frac{3}{8}$$.
There also is a generalisation of this theorem, stating that a finite group $$G$$ is nilpotent of class $$n$$ iff $$\frac{|\{(a_0, a_1, \dots, a_n) \in G^{n + 1} : [\dots[[a_0, a_1], a_2], \dots, a_n] = e\}|}{{|G|}^{n + 1}} > 1 - \frac{3}{2^{n + 2}},$$ thus making $$\epsilon([\dots[[a_0, a_1], a_2], \dots, a_n]) = \frac{3}{2^{n + 2}}$$.
However, I have never seen similar statements about other one-word varieties being proved or disproved, despite such a question seeming quite natural . . .
Actually, I doubt that the conjecture in the main part of question is true. However, I failed to find any counterexamples myself.
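For the original case $$w = [a, b]$$ the bound is easy to test by brute force on small groups; here is a minimal Python sketch (my own illustration, not from the question) computing the commuting probability of $$S_3$$:

from itertools import permutations, product

# Elements of S_3 as tuples: p maps i to p[i].
G = list(permutations(range(3)))

def compose(p, q):
    # (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(3))

pairs = sum(1 for p, q in product(G, repeat=2) if compose(p, q) == compose(q, p))
print(pairs / len(G) ** 2)  # 0.5

The output 0.5 is safely below 5/8, as the theorem requires for a nonabelian group; the dihedral and quaternion groups of order 8 attain 5/8 exactly.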
• I once wrote out an answer which is about generalising the Erdos-Turan result to infinite groups: math.stackexchange.com/a/2809964/10513 You might find it interesting/relevant. Jan 18 '19 at 11:54
• Not mentioned in a 2015 survey Farrokhi, D. G. (2015). ON THE PROBABILITY THAT A GROUP SATISFIES A LAW: A SURVEY (Research on finite groups and their representations, vertex operator algebras, and algebraic combinatorics), muroran-it.ac.jp/mathsci/danwakai/past/articles/201404-201503/…. Mentioned as open in a note by John D. Dixon, "Probabilistic Group Theory", people.math.carleton.ca/~jdixon/Prgrpth.pdf
– Dap
Jan 18 '19 at 18:00
• Is the $n=1$ case obviously true? Or is even that case difficult? Jan 23 '19 at 15:48
• @MeesdeVries, for the case $n = 1$ only three results are currently known: $\epsilon(x) = \frac{1}{2}$, $\epsilon(x^2) = \frac{1}{4}$ and $\epsilon(x^3) = \frac{2}{9}$. Aug 2 '19 at 11:19
• @MeesdeVries, there is also a conjecture, that $\epsilon(x^p) = \frac{p-1}{p^2}$ for prime $p$, however it remains unproven. Aug 2 '19 at 12:02
|
2022-01-20 15:36:28
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 18, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8071988821029663, "perplexity": 397.60953181794815}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301863.7/warc/CC-MAIN-20220120130236-20220120160236-00148.warc.gz"}
|
https://jordanbell.info/euler/euler-algebra-I-III-09.html
|
### Part I. Section III. Chapter 9. “Observations on the Rules of Proportion and their Utility.”
477 This theory is so useful in the common occurrences of life, that scarcely any person can do without it. There is always a proportion between prices and commodities; and when different kinds of money are the subject of exchange, the whole consists in determining their mutual relations. The examples furnished by these reflections will be very proper for illustrating the principles of proportion, and showing their utility by the application of them.
478 If we wished to know, for example, the relation between two kinds of money; suppose a Louis d’or and a ducat: we must first know the value of those pieces when compared with others of the same kind. Thus, a Louis d’or being, at Berlin, worth 5 Reichsthaler and 8 Groschen,1 and a ducat being worth 3 Reichsthaler, we may reduce these two values to one denomination; either to Reichsthaler, which gives the proportion
1 Louis : 1 ducat = 5⅓ Reichsthaler : 3 Reichsthaler = 16:9;
or to Groschen, in which case we have
1 Louis : 1 ducat = 128:72 = 16:9,
which proportions evidently give the true relation of the Louis d’or to the ducat; for the equality of the products of the extremes and the means gives, in both cases,
9 Louis d’or = 16 ducats;
and, by means of this comparison, we may change any sum of Louis d’or into ducats, and vice-versa. Thus, suppose it were required to find how many ducats there are in 1000 Louis d’or, we have this proportion:
9 Louis : 1000 Louis ∷ 16 ducats : 1777⁷⁄₉ ducats,
the number sought. If, on the contrary, it were required to find how many Louis d’or there are in 1000 ducats, we have the following proportion:
16 ducats : 1000 ducats ∷ 9 Louis : 562½ Louis,
the answer.
479 At St. Petersburg the value of the ducat varies, and depends on the course of exchange; which course determines the value of the ruble in Dutch stuivers, 105 of which make a ducat. So that when the exchange is at 45 stuivers per ruble, we have this proportion
1 ruble : 1 ducat = 45:105 = 3:7;
and hence this equality, 7 rubles = 3 ducats.
Hence again we shall find the value of a ducat in rubles; for
3 ducats : 7 rubles = 1 ducat : 2⅓ rubles;
that is, 1 ducat is equal to 2⅓ rubles.
But if the exchange were at 50 stuivers, the proportion would be,
1 ruble : 1 ducat = 50:105 = 10:21;
which would give 21 rubles = 10 ducats; whence 1 ducat = 2⅒ rubles. Lastly, when the exchange is at 44 stuivers, we have
1 ruble : 1 ducat = 44:105,
and thus2
1 ducat = 2¹⁷⁄₄₄ rubles = 2 rubles 38⁷⁄₁₁ kopeks.
480 It follows also from this, that we may compare different kinds of money, which we have frequently occasion to do in bills of exchange.
Suppose, for example, that a person of St. Petersburg has 1000 rubles to be paid to him at Berlin, and that he wishes to know the value of this sum in ducats at Berlin.
The exchange is at 47½; that is to say, one ruble makes 47½ stuivers; and in Holland, 20 stuivers make a Dutch guilder; 2½ guilders make a Dutch rijksdaalder: also, the exchange of Holland with Berlin is at 142; that is to say, for 100 Dutch rijksdaalder, 142 Reichsthaler are paid at Berlin; and lastly, the ducat is worth 3 Reichsthaler at Berlin.3
481 To resolve the question proposed, let us proceed step by step. Beginning therefore with the stuivers, since 1 ruble = 47½ stuivers, or 2 rubles = 95 stuivers, we shall have
2 rubles : 1000 rubles ∷ 95 stuivers : 47500 stuivers;
then again,
20 stuivers : 47500 stuivers ∷ 1 guilder : 2375 guilders
Also, since 2½ Dutch guilders = 1 Dutch rijksdaalder, or 5 Dutch guilders = 2 Dutch rijksdaalder; we shall have
5 guilders : 2375 guilders ∷ 2 rijksdaalder : 950 rijksdaalder.
Then, taking the Reichsthaler of Berlin, according to the exchange, at 142, we shall have
100 rijksdaalder : 950 rijksdaalder ∷ 142 Reichsthaler : 1349 Reichsthaler
And lastly,
3 Reichsthaler : 1349 Reichsthaler ∷ 1 ducat : 449⅔ ducats,
which is the number sought.
482 Now, in order to render these calculations still more complete, let us suppose that the Berlin banker refuses, under some pretext or other, to pay this sum, and to accept the bill of exchange without five per cent. discount; that is, paying only 100 instead of 105. In that case, we must make use of the following proportion:
105 : 100 ∷ 449⅔ : 428¹⁶⁄₆₃ ducats;
which is the answer under those conditions.
483 We have shown that six operations are necessary in making use of the Rule of Three; but we can greatly abridge those calculations by a rule which is called the Rule of Reduction, or Double Rule of Three. To explain which, we shall first consider the two antecedents of each of the six preceding operations:
1st. 2 rubles : 95 stuivers
2nd. 20 stuivers : 1 Dutch guilder
3rd. 5 Dutch guilders : 2 Dutch rijksdaalder
4th. 100 Dutch rijksdaalder : 142 Reichsthaler
5th. 3 Reichsthaler : 1 ducat
6th. 105 ducats : 100 ducats
If we now look over the preceding calculations, we shall observe, that we have always multiplied the given sum by the third terms, or second antecedents, and divided the products by the first: it is evident, therefore, that we shall arrive at the same results by multiplying at once the sum proposed by the product of all the third terms, and dividing by the product of all the first terms: or, which amounts to the same thing, that we have only to make the following proportion: As the product of all the first terms, is to the given number of rubles, so is the product of all the second terms, to the number of ducats payable at Berlin.
484 This calculation is abridged still more, when amongst the first terms some are found that have common divisors with the second or third terms; for, in this case, we destroy those terms, and substitute the quotient arising from the division by that common divisor. The preceding example will, in this manner, assume the following form.
$(2\cdot 20\cdot 5\cdot 100\cdot 3\cdot 105) : 1000 :: (95\cdot 2\cdot 142\cdot 100) : \dfrac{1000\cdot 95\cdot 2\cdot 142\cdot 100}{2\cdot 20\cdot 5\cdot 100\cdot 3\cdot 105};$
and after cancelling the common divisors in the numerator and denominator, this will become
$\dfrac{10\cdot 19\cdot 142}{3\cdot 21} = \frac{26980}{63} = 428\frac{16}{63} \, \textrm{ducats},$
as before.
485 The method which must be observed in using the Rule of Reduction is this: we begin with the kind of money in question, and compare it with another which is to begin the next relation, in which we compare this second kind with a third, and so on. Each relation, therefore, begins with the same kind as the preceding relation ended with; and the operation is continued till we arrive at the kind of money which the answer requires; at the end of which we must reckon the fractional remainders.
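The whole chain of article 481 can be carried out exactly in rational arithmetic. A minimal Python sketch (my own illustration, not part of the original text): each pair below is a (first term, second term) relation from the table of article 483, with the 5 per cent discount of article 482 included, and the given sum is multiplied by every second term and divided by every first term, exactly as the Rule of Reduction prescribes.

from fractions import Fraction

relations = [
    (2, 95),     # 2 rubles = 95 stuivers
    (20, 1),     # 20 stuivers = 1 Dutch guilder
    (5, 2),      # 5 Dutch guilders = 2 Dutch rijksdaalder
    (100, 142),  # 100 Dutch rijksdaalder = 142 Reichsthaler
    (3, 1),      # 3 Reichsthaler = 1 ducat
    (105, 100),  # 105 ducats paid as 100 (5 per cent discount)
]

amount = Fraction(1000)  # rubles
for first, second in relations:
    amount = amount * second / first

print(amount)  # 26980/63, i.e. 428 16/63 ducats, as in article 484

The same loop answers articles 486 and 487 once their relations are listed in order.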
486 Let us give some other examples, in order to facilitate the practice of this calculation.
If ducats gain at Hamburg 1 per cent., on 2 Reichsthaler Banco (that is to say, if 50 ducats are worth, not 100, but 101 Reichsthaler Banco), and if the exchange between Hamburg and Konigsberg is 119 Polish grosz (that is, if 1 Reichsthaler Banco is equal to 119 Polish grosz), how many Polish złoty are equivalent to 1000 ducats (it being understood that 30 Polish grosz make 1 Polish złoty)?
1. 50 ducats = 101 Reichsthaler Banco
2. 1 Reichsthaler Banco = 119 grosz
3. 30 grosz = 1 złoty
Therefore 1500 ducats = 3030 Reichsthaler Banco = 360570 grosz = 12019 złoty;
hence 500 ducats = 4006⅓ złoty, and therefore 1000 ducats = 8012⅔ złoty.
487 We will propose another example, which may still farther illustrate this method.
Ducats of Amsterdam are brought to Leipzig, having in the former city the value of 5 guilders 4 stuivers current (that is to say, 1 ducat is worth 104 Dutch stuivers, and 5 ducats are worth 26 Dutch guilders). If, therefore, the agio of the bank at Amsterdam is 5 per cent. (that is, if 105 current are equal to 100 banco),4 and if the exchange from Leipzig to Amsterdam, in bank money, is 133¼ per cent. (that is, if for 100 Dutch rijksdaalder banco we pay at Leipzig 133¼ Thaler current); and lastly, 2 Dutch rijksdaalder making 5 Dutch guilder; it is required to determine how many Thaler we must pay at Leipzig, according to these exchanges, for 1000 ducats?
1. 5 ducats = 26 guilders current
2. 105 guilders current = 100 guilders banco; so 21 guilders current = 20 guilders banco
3. 5 guilder banco = 2 rijksdaalder banco
4. 100 rijksdaalder banco = 133¼ Thaler current in Leipzig; so 400 rijksdaalder banco = 533 Thaler current in Leipzig
Therefore 210000 ducats = 554320 Thaler in Leipzig; so 21000 ducats = 55432 Thaler in Leipzig.
Thus dividing by 3,
7000 ducats = 18477⅓ Thaler in Leipzig;
and dividing by 7,
1000 ducats = 2639⁴⁄₇ + ¹⁄₂₁ Thaler in Leipzig = 2639¹³⁄₂₁ Thaler in Leipzig.
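The same chaining, again only a verification sketch and not part of the original text, confirms the results of Articles 486 and 487:

```python
from fractions import Fraction as F

# Article 486: 1000 ducats -> Polish zloty
print(1000 * F(101, 50) * 119 / 30)                         # 24038/3  = 8012 2/3 zloty
# Article 487: 1000 ducats -> Thaler current at Leipzig
print(1000 * F(26, 5) * F(20, 21) * F(2, 5) * F(533, 400))  # 55432/21 = 2639 13/21 Thaler
```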
#### Editions
1. Leonhard Euler. Elements of Algebra. Translated by Rev. John Hewlett. Third Edition. Longmans, Hurst, Rees, Orme, and Co. London. 1822.
2. Leonhard Euler. Vollständige Anleitung zur Algebra. Mit den Zusätzen von Joseph Louis Lagrange. Herausgegeben von Heinrich Weber. B. G. Teubner. Leipzig and Berlin. 1911. Leonhardi Euleri Opera omnia. Series prima. Opera mathematica. Volumen primum.
1. 1 Reichsthaler = 24 Groschen.
2. 1 ruble = 100 kopeks.
3. 1 Dutch rijksdaalder = 2½ Dutch guilders.
4. Wiktionary: “agio”: “The premium or percentage on a better sort of money when it is given in exchange for an inferior sort.”
|
2023-04-01 11:21:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6281875967979431, "perplexity": 5558.986043584625}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949958.54/warc/CC-MAIN-20230401094611-20230401124611-00463.warc.gz"}
|
https://www.tutorialspoint.com/find-the-minimum-number-of-moves-needed-to-move-from-one-cell-of-matrix-to-another-in-python
|
# Find the minimum number of moves needed to move from one cell of matrix to another in Python
Suppose we have an N×N matrix M filled with the values 1, 0, 2, and 3. We have to find the minimum number of moves required to move from the source cell to the destination cell, moving up, down, left, or right through blank cells only.
• Cell with value 1 indicates Source.
• Cell with value 2 indicates Destination.
• Cell with value 3 indicates Blank cell.
• Cell with value 0 indicates Blank Wall.
There will be only one source cell and only one destination cell. There may be more than one path from the source to the destination. Each move in the matrix counts as 1.
So, if the input is like
3 3 1 0
3 0 3 3
3 3 0 3
0 3 2 3
then the output will be 5,
(The original article shows the same matrix as an image, with the shortest path from source to destination highlighted in green.)
To solve this, we will follow these steps −
• nodes := order * order + 2
• g := a blank graph with ‘nodes’ number of vertices
• k := 1
• for i in range 0 to order, do
• for j in range 0 to order, do
• if mat[i, j] is not same as 0, then
• if is_ok (i , j + 1 , mat) is non-zero, then
• create an edge between k and k + 1 nodes of g
• if is_ok (i , j - 1 , mat) is non-zero, then
• create an edge between k, k - 1 nodes of g
• if is_ok (i + 1 , j , mat) is non-zero, then
• create an edge between k, k + order nodes of g
• if is_ok (i - 1 , j , mat) is non-zero, then
• create an edge between k, k - order nodes of g
• if mat[i, j] is same as 1, then
• src := k
• if mat[i, j] is same as 2, then
• dest := k
• k := k + 1
• return perform bfs from src to dest of g
## Example
Let us see the following implementation to get a better understanding:
class Graph:
    def __init__(self, nodes):
        self.nodes = nodes
        self.adj = [[] for i in range(nodes)]
    def insert_edge(self, src, dest):
        # undirected edge; each edge is added from both endpoints,
        # and the resulting duplicates are harmless for BFS
        self.adj[src].append(dest)
        self.adj[dest].append(src)
    def BFS(self, src, dest):
        if src == dest:
            return 0
        level = [-1] * self.nodes
        queue = []
        level[src] = 0
        queue.append(src)
        while len(queue) != 0:
            u = queue.pop(0)
            for v in self.adj[u]:
                if level[v] == -1:
                    level[v] = level[u] + 1
                    queue.append(v)
        return level[dest]
def is_ok(i, j, mat):
    global order
    if (i < 0 or i >= order) or (j < 0 or j >= order) or mat[i][j] == 0:
        return False
    return True
def get_min_moves(mat):
    global order
    src, dest = None, None
    nodes = order * order + 2
    g = Graph(nodes)
    k = 1   # node number of cell (i, j) is i * order + j + 1
    for i in range(order):
        for j in range(order):
            if mat[i][j] != 0:
                if is_ok(i, j + 1, mat):   # right neighbour
                    g.insert_edge(k, k + 1)
                if is_ok(i, j - 1, mat):   # left neighbour
                    g.insert_edge(k, k - 1)
                if is_ok(i + 1, j, mat):   # down neighbour
                    g.insert_edge(k, k + order)
                if is_ok(i - 1, j, mat):   # up neighbour
                    g.insert_edge(k, k - order)
                if mat[i][j] == 1:
                    src = k
                if mat[i][j] == 2:
                    dest = k
            k += 1
    return g.BFS(src, dest)
order = 4
mat = [[3,3,1,0], [3,0,3,3], [3,3,0,3], [0,3,2,3]]
print(get_min_moves(mat))
## Input
[[3,3,1,0], [3,0,3,3], [3,3,0,3], [0,3,2,3]]
## Output
5
|
2022-01-24 23:51:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5014082789421082, "perplexity": 3899.9637565917487}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304686.15/warc/CC-MAIN-20220124220008-20220125010008-00223.warc.gz"}
|
https://math.stackexchange.com/questions/1904655/verify-this-identity-sin6-alpha-sin2-alpha-frac-cos22-alpha-cos2-alp
|
# Verify this identity: $\sin6\alpha + \sin2\alpha = \frac{\cos^22\alpha-\cos2\alpha\cos6\alpha}{\sin2\alpha}$
Say I want to verify the following identity:
$$\sin6\alpha + \sin2\alpha = \frac{\cos^22\alpha-\cos2\alpha\cos6\alpha}{\sin2\alpha}$$
I'm working on the first member of the identity so that it will eventually match the second one.
$$\sin6\alpha + \sin2\alpha =$$
$$\sin(2\times3\alpha)+\sin2\alpha=$$
$$2\sin3\alpha\cos3\alpha+2\sin\alpha\cos\alpha$$
That doesn't seem to be taking me anywhere. Any hints?
• Try to simplify from RHS rather than LHS – Zack Ni Aug 26 '16 at 19:30
The identity is equivalent (for $\sin2\alpha\ne0$) to $$\sin^22\alpha+\sin2\alpha\sin6\alpha-\cos^22\alpha+\cos2\alpha\cos6\alpha=0$$ Note that $$\cos2\alpha\cos6\alpha+\sin2\alpha\sin6\alpha=\cos(6\alpha-2\alpha) =\cos4\alpha$$ and $$\cos^22\alpha-\sin^22\alpha=\cos4\alpha$$
• Wow, that makes it really easy! – John Aug 26 '16 at 20:08
• Always fun, those trig identities...+1 – imranfat Aug 26 '16 at 21:28
Use the triple angle formulas:
$$\cos 6\alpha = 4 \cos^3 2\alpha - 3\cos 2\alpha$$ $$\sin 6\alpha = -4 \sin^3 2\alpha + 3\sin 2\alpha$$
$$\frac{\cos^22\alpha-\cos2\alpha\cos6\alpha}{\sin2\alpha}=\frac{\cos 2\alpha(\cos2\alpha-\cos6\alpha)}{\sin2\alpha}\\ \qquad\qquad\qquad\qquad\qquad\,\,\,\,\,=\frac{2\cos 2\alpha(\sin2\alpha\times\sin 4\alpha)}{\sin2\alpha}\\ \qquad\qquad\qquad\,=2\cos 2\alpha\sin 4\alpha\\ \\ \\ \,\,\,\qquad\qquad\qquad=\sin 6\alpha+\sin 2\alpha$$ For more details, Check the list of trigonometric identities
There's a general fact that can be useful in these problems: Expressed as functions of $\cos\theta$, the trigonometric expressions $\cos n\theta$ and $\frac{\sin {(n\theta+\theta)}}{\sin \theta}$ can be expressed as polynomials. (Specifically, they'll be the $n$th Chebyshev polynomials of the first and second kind respectively.)
With this in mind, if we divide both sides of the desired identity by $\sin2\alpha$, we have $$\frac{\sin6\alpha}{\sin 2\alpha} + 1 = \frac{\cos^22\alpha-\cos2\alpha\cos6\alpha}{\sin^2 2\alpha} = \frac{\cos 2\alpha-\cos6\alpha}{1-\cos^2 2\alpha}\cos2\alpha.$$ But the above principle indicates we can express both sides in terms of $\cos 2\alpha$, and indeed \begin{align} \frac{\sin6\alpha}{\sin 2\alpha}+1 &=(4\cos^2 2\alpha-1)+1\\ &=4\cos^2 2\alpha,\\ \frac{\cos 2\alpha-\cos6\alpha}{1-\cos^2 2\alpha}\cos2\alpha &=\frac{\cos 2\alpha-(4\cos^3 2\alpha-3 \cos2\alpha )}{1-\cos^2 2\alpha}\cos2\alpha\\ &=\frac{4\cos 2\alpha(1-\cos^2 2\alpha)}{1-\cos^2 2\alpha}\cos 2\alpha\\ &=4\cos^2 2\alpha. \end{align} (In each case the first equality amounts to a triple angle identity). So the two sides indeed agree.
• I should hasten to add that the above strategy doesn't have the virtue of efficiency. The main advantage it has is that, once one has written things in terms of $\cos 2\alpha$, one is working with polynomials and rational functions which are easier to simplify and rearrange. – Semiclassical Aug 26 '16 at 20:17
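For anyone who wants a machine check of the identity (an addition here, not part of the original thread), sympy confirms it symbolically:

```python
from sympy import symbols, sin, cos, simplify

a = symbols('alpha')
lhs = sin(6*a) + sin(2*a)
rhs = (cos(2*a)**2 - cos(2*a)*cos(6*a)) / sin(2*a)
print(simplify(lhs - rhs))  # prints 0, so the identity holds
```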
|
2021-05-13 06:22:53
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.971103310585022, "perplexity": 585.0703631049231}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991537.32/warc/CC-MAIN-20210513045934-20210513075934-00429.warc.gz"}
|
https://zbmath.org/?q=an:0729.58044
|
Periodic solutions of dynamical systems by a saddle point theorem of Rabinowitz. (English) Zbl 0729.58044
The author considers the problem of finding periodic solutions of differential equations of the type $(1)\quad \ddot{x} + \nabla_x V(t,x) = 0, \qquad x\in\mathbb{R}^n,\quad V\in C^1(\mathbb{R}\times\mathbb{R}^n,\mathbb{R}),\quad V(t+T,x)=V(t,x).$ Using some special cases of linking techniques [see, for instance, V. Benci and P. H. Rabinowitz, Invent. Math. 52, 241-273 (1979; Zbl 0465.49006) and P. H. Rabinowitz, Lect. Notes Math. 648, 97-115 (1978; Zbl 0377.35020)] he establishes some sufficient conditions for periodic solutions to exist:
Theorem 1. If $V(t,x)\leq 0$ for all $(t,x)\in\mathbb{R}\times\mathbb{R}^n$, $V\not\equiv 0$, and $V(t,x)\to 0$, $\nabla_x V(t,x)\to 0$ as $\|x\|\to\infty$ uniformly in $t$, then equation (1) has at least one $T$-periodic solution $x(t)$ such that $V(t,x(t))<0$ for some $t$, and there exists a sequence of distinct subharmonic solutions.
Theorem 2. If $V(t,x)\to+\infty$ as $\|x\|\to\infty$ uniformly in $t$ and $\nabla_x V(t,x)$ is bounded, then for every $k\in\mathbb{N}$ there exists a $kT$-periodic solution $x_k(t)$ such that $\|x_k\|_{L^\infty}\to+\infty$ as $k\to+\infty$.
A generalization of Theorem 1, in which $V(t,x)$ is bounded and may change its sign, is considered too.
##### MSC:
37G99 Local and nonlocal bifurcation theory for dynamical systems
35B10 Periodic solutions to PDEs
##### References:
[1] Ambrosetti, A. & Coti Zelati, V., Critical points with lack of compactness and singular dynamical systems, Annali Mat. pura appl. (to appear). · Zbl 0642.58017
[2] Ambrosetti, A. & Coti Zelati, V., Non collision orbits for a class of Keplerian potentials, C.r. hebd. Séanc. Acad. Sci. Paris (to appear).
[3] Bahri, A. & Rabinowitz, P. H., Periodic solutions for a class of singular Hamiltonian systems, Proc. Nonlinear Analysis and Variational Problems, Isola d'Elba, Italy, September 1986 (to appear).
[4] Benci, V. & Fortunato, D., Subharmonic solutions of prescribed minimal period for non autonomous differential equations, Proc. int. conf. on recent advances in Hamiltonian systems, Univ. of L'Aquila (1986)
[5] Benci, V. & Rabinowitz, P. H., Critical points for indefinite functionals, Inventiones math., 52 (1979) · Zbl 0465.49006
[6] Capozzi, A., Fortunato, D. & Salvatore, A., Periodic solutions of dynamical systems, Atti VII congresso AIMETA (1984) · Zbl 0599.70010
[7] Capozzi, A., Fortunato, D. & Salvatore, A., Periodic solutions of Lagrangian systems with bounded potential, J. math. analysis applic., 123 (1987) · Zbl 0664.34053
[8] Coti Zelati, V., Periodic solutions of dynamical systems with bounded potential, J. diff. eqns, 67 (1987) · Zbl 0646.34049
[9] Degiovanni, M., Giannoni, F. & Marino, A., Periodic solutions of dynamical systems with Newtonian type potentials, Proc. NATO ARW periodic solutions of Hamiltonian systems and related topics, Il Ciocco, Italy (1986)
[10] Degiovanni, M., Giannoni, F. & Marino, A., Periodic solutions of dynamical systems with Newtonian type potentials, Atti Accad. naz. Lincei, Rc. Sci. Fis. Mat. Nat. (to appear). · Zbl 0632.34038
[11] Degiovanni, M. & Giannoni, F., Periodic solutions of dynamical systems with Newtonian type potentials, Annali Scu. norm. sup. Pisa (to appear). · Zbl 0632.34038
[12] Michaelek, R. & Tarantello, G., Subharmonics with prescribed minimal period for Hamiltonian systems, Proc. int. conf. on recent advances in Hamiltonian systems, Univ. of L'Aquila (1986) · Zbl 0667.34059
[13] Rabinowitz, P. H., A minimax principle and applications to elliptic partial differential equations · Zbl 0152.10003
[14] Rabinowitz, P. H., On subharmonic solutions of Hamiltonian systems, Communs pure appl. math., 23 (1980) · Zbl 0425.34024
[15] Rabinowitz, P. H., Minimax methods in critical point theory with applications to differential equations, Reg. conf. ser. math., Am. math. soc., 65 (1986) · Zbl 0609.58002
[16] Schwartz, J. T., Nonlinear functional analysis, Gordon and Breach, New York (1969) · Zbl 0203.14501
[17] Thews, K., T-periodic solutions of time dependent Hamiltonian systems with a potential vanishing at infinity, Manuscripta math., 33 (1981) · Zbl 0467.35009
[18] Willem, M., Subharmonic oscillations of nonlinear systems · Zbl 0579.34030
|
2021-10-18 10:03:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7032980918884277, "perplexity": 3381.351308110713}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585201.94/warc/CC-MAIN-20211018093606-20211018123606-00280.warc.gz"}
|
http://dictionnaire.sensagent.leparisien.fr/Reactivity%20(chemistry)/en-en/
|
# Reactivity (chemistry)
Reactivity in chemistry refers to
• the chemical reactions of a single substance,
• the chemical reactions of two or more substances that interact with each other,
• the systematic study of sets of reactions of these two kinds,
• methodology that applies to the study of reactivity of chemicals of all kinds,
• experimental methods that are used to observe these processes,
• theories to predict and to account for these processes.
The chemical reactivity of a single substance (reactant) covers its behaviour in which
• it decomposes,
• it forms new substances by addition of atoms from another reactant or reactants,
• reactions in which it interacts with two or more other reactants to form two or more products.
The chemical reactivity of a substance can refer to
• the variety of circumstances (conditions that include temperature, pressure, presence of catalysts) in which it reacts, in combination with
• the variety of substances with which it reacts,
• the equilibrium point of the reaction (i.e. the extent to which all of it reacts),
• the rate of the reaction.
A responsible discussion of chemical reactivity can be found in any standard textbook on physical chemistry.
## An alternative point of view
Reactivity is a somewhat vague concept used in chemistry which appears to embody both thermodynamic factors and kinetic factors i.e. 'whether or not a substance reacts and how fast it reacts'. Both factors are actually distinct and both are commonly temperature dependent. For example, it is commonly asserted that the reactivity of group one metals (Na, K, etc.) increases down the group in the periodic table, or that hydrogen's reactivity is evidenced by its reaction with oxygen. In fact, the rate of reaction of alkali metals (as evidenced by their reaction with water for example) is a function not only of position within the group but particle size; and hydrogen will not react with oxygen even though the equilibrium constant is very large unless a flame initiates the radical reaction which leads to an explosion.
Restriction of the term to refer to reaction rates leads to a more consistent view. Reactivity then refers to the rate at which a chemical substance tends to undergo a chemical reaction in time. In pure compounds, reactivity is regulated by the physical properties of the sample. For instance, grinding a sample to a higher specific surface area increases its reactivity. In impure compounds, the reactivity is also affected by the inclusion of contaminants. In crystalline compounds, the crystalline form can also affect reactivity. However in all cases, reactivity is primarily due to the sub-atomic properties of the compound.
Although it is commonplace to make statements that substance 'X is reactive', all substances react with some reagents and not others. For example, in making the statement that 'sodium metal is reactive', we are alluding to the fact that sodium reacts with many common reagents (including pure oxygen, chlorine, hydrochloric acid, water) and/or that it reacts rapidly with such materials at either room temperature or using a bunsen flame.
'Stability' should not be confused with reactivity. For example, an isolated molecule of an electronically excited state of the oxygen molecule will spontaneously emit light after a statistically defined period. The half-life of such a species is another manifestation of its stability, but its reactivity can only be ascertained via its reactions with other species.
## Causes of reactivity
The second meaning of 'reactivity', that of whether or not a substance reacts, can be rationalised at the atomic and molecular level using older and simpler valence bond theory and also atomic and molecular orbital theory. Thermodynamically, a chemical reaction occurs because the products (taken as a group) are at a lower free energy than the reactants; the lower energy state is referred to as the 'more stable state'. Quantum chemistry provides the most in-depth and exact understanding of the reason this occurs. Generally, electrons exist in orbitals that are the result of solving the Schrödinger equation for specific situations.
All things (values of the n and ml quantum numbers) being equal, the order of stability of electrons in a system from least to greatest is unpaired with no other electrons in similar orbitals, unpaired with all degenerate orbitals half filled and the most stable is a filled set of orbitals. In order to achieve one of these orders of stability, an atom will react with another atom, thereby stabilizing both atoms. For example, a lone hydrogen atom has a single electron in its 1s orbital. It becomes significantly more stable (as much as 100 kilocalories per mole, or 420 kilojoules per mole) when reacting to form H2.
It is for this same reason that carbon will almost always form four bonds. Its ground state valence configuration is 2s2 2p2, half filled. However, the activation energy to go from half filled to fully filled p orbitals is so small it is negligible, and as such carbon will form them almost instantaneously, meanwhile the process releases a significant amount of energy (exothermic). This four equal bond configuration is called sp3 hybridization.
The above three paragraphs rationalise, albeit very generally, the reactions of some common species, particularly atoms, but chemists have so far been unable to jump from such general considerations to quantitative models of reactivity.
## Chemical kinetics: reaction rate as reactivity
The rate of any given reaction,
Reactants → Products
is governed by the rate law:
$\text{Rate} = k \cdot [A]$
where the rate is the change in the molar concentration in one second in the rate-determining step of the reaction (the slowest step), [A] is the product of the molar concentration of all the reactants raised to the correct order, known as the reaction order, and k is the reaction constant, which is constant for one given set of circumstances (generally temperature and pressure) and independent of concentration. The greater the reactivity of a compound the higher the value of k and the higher the rate. For instance, if,
A+B → C+D
Then:
$\text{Rate} = k \cdot [A]^n \cdot [B]^m$
where n is the reaction order of A, m is the reaction order of B, n+m is the reaction order of the full reaction, and k is the reaction constant.
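As a small numerical sketch (the rate constant, concentrations, and orders below are made up for illustration), the rate law can be evaluated directly:

```python
def reaction_rate(k, concentrations, orders):
    """Rate = k * product over reactants of [X]**order."""
    rate = k
    for c, n in zip(concentrations, orders):
        rate *= c ** n
    return rate

# Hypothetical reaction A + B -> C + D, first order in each reactant:
print(reaction_rate(k=0.42, concentrations=[0.10, 0.25], orders=[1, 1]))
# 0.0105 (in mol/(L*s) if k is in L/(mol*s) and concentrations in mol/L)
```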
|
2021-01-19 16:34:31
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 2, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.715675413608551, "perplexity": 4737.976847888003}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703519395.23/warc/CC-MAIN-20210119135001-20210119165001-00042.warc.gz"}
|
http://www.gamedev.net/index.php?app=forums&module=extras§ion=postHistory&pid=4904150
|
### #Actual - Hodgman
Posted 18 January 2012 - 07:02 PM
Does anyone have any links/tips on packing/unpacking single floats to/from multi-channel texture formats?
I know this thing used to be popular when deferred shading hit the scene, as people would do things like writing linear depth values into RGBA8 render-targets (i.e. splitting a float into 4 8-bit integers, then reconstructing the float afterwards).
I'm trying to do this on SM3, which means no bitwise logic (i.e. |, >>, &, etc).
In my specific case, I'm trying to split a float into four 2-bit values, though the logic should be the same as if I were trying to split it into four 8-bit values.
In both cases below, I'm assuming that the input value data is from 0.0f to 1.0f inclusive.
The code that I keep finding everywhere (converted from 8-bit to 2-bit) is as follows:
//splitting:
float4 shift = float4( 1, 4, 16, 64 );
float4 abcd = frac( data * shift );
abcd -= abcd.xxyz * float4(0, (float3)1/4.0);
//reconstructing:
float reconstructed = dot( abcd, float4(3*64,3*16,3*4,3)/255.0 );
However, this code is buggy. It's almost right, and it looks mostly right when using the 8-bit version, but the flaw becomes much more exaggerated when you bring it down to a 2-bit version. The problem is that frac never returns 1.0, so you skip over certain values (causing banding), and you can never encode the input value of 1.0f.
The code that I came up with is as follows, and it was mostly through a process of trial-and-error that I arrived at this result.
//splitting:
float4 abcd = frac(data * (255.0/float4(256,64,16,4)));
abcd = floor(abcd*4)/3;
//reconstructing:
float reconstructed = dot( abcd, float4(3*64,3*16,3*4,3)/255.0 );
N.B. The magic numbers in use are:
255 = max 8-bit value (2-bit * 4 components)
3 = max 2-bit value
4 = num 2-bit values
1,4,16,64,256 = shift by 0, 2, 4, 6, 8 places.
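A host-side sanity check (a Python sketch using exact rational arithmetic, not part of the original post) shows that this split/reconstruct pair round-trips every 8-bit input exactly, so any residual banding on the GPU comes from float precision near the floor boundaries:

```python
import math
from fractions import Fraction as F

def pack(v):
    """Split the 8-bit value v (0..255) into four 2-bit channel values in [0, 1]."""
    data = F(v, 255)
    channels = []
    for s in (256, 64, 16, 4):
        x = data * F(255, s)                        # data * (255.0 / float4(256,64,16,4))
        fr = x - math.floor(x)                      # frac(...)
        channels.append(F(math.floor(fr * 4), 3))   # floor(abcd * 4) / 3
    return channels

def unpack(channels):
    """dot(abcd, float4(3*64, 3*16, 3*4, 3) / 255.0)"""
    weights = (3 * 64, 3 * 16, 3 * 4, 3)
    return sum(c * w for c, w in zip(channels, weights)) / 255

# round-trip is exact for every representable 8-bit input
assert all(unpack(pack(v)) == F(v, 255) for v in range(256))
print("lossless for all 256 8-bit inputs")
```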
|
2013-12-19 22:51:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19215060770511627, "perplexity": 3771.7006453106774}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1387345768537/warc/CC-MAIN-20131218054928-00065-ip-10-33-133-15.ec2.internal.warc.gz"}
|
https://hal.inria.fr/inria-00070381
|
# Optimal Solution of the Maximum All Request Path Grooming Problem
1 MASCOTTE - Algorithms, simulation, combinatorics and optimization for telecommunications
CRISAM - Inria Sophia Antipolis - Méditerranée , Laboratoire I3S - COMRED - COMmunications, Réseaux, systèmes Embarqués et Distribués
Abstract : We give an optimal solution to the Maximum All Request Path Grooming (MARPG) problem motivated by a traffic grooming application. The MARPG problem consists in finding the maximum number of connections which can be established in a path of size $N$, where each arc has a capacity or bandwidth $C$ (grooming factor). We present a greedy algorithm to solve the problem and an explicit formula for the maximum number of requests that can be groomed. In particular, if $C = s(s+1)/2$ and $N > s(s-1)$, an optimal solution is obtained by taking all the requests of smallest length, that is of length 1 to $s$. However this is not true in general since anomalies can exist. We give a complete analysis and the exact number of such anomalies.
### Identifiers
• HAL Id : inria-00070381, version 1
### Citation
Jean-Claude Bermond, Michel Cosnard, David Coudert, Stéphane Pérennes. Optimal Solution of the Maximum All Request Path Grooming Problem. [Research Report] RR-5627, INRIA. 2006, pp.12. ⟨inria-00070381⟩
|
2022-05-22 04:48:01
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3788740038871765, "perplexity": 2999.0468351814256}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662543797.61/warc/CC-MAIN-20220522032543-20220522062543-00186.warc.gz"}
|
http://blog.bogatron.net/blog/2014/02/02/visualizing-dirichlet-distributions/
|
# Visualizing Dirichlet Distributions with Matplotlib
This post describes how I went about visualizing probability density functions of 3-dimensional Dirichlet distributions with matplotlib. If you're already familiar with the Dirichlet distribution, you might want to skip the next section.
## Rolling Dice
To understand what the Dirichlet distribution describes, it is useful to consider how it can characterize the variability of a random multinomial distribution. Suppose we are going to manufacture 6-sided dice. But for this example, we only want the allowable outcome of a die roll to be the number 1, 2, or 3. So when we produce a die, we will place the number 1 on two of the faces and likewise for the numbers 2 and 3. If we produce a "fair" die, then the probabilities of the three outcomes will be equal (the probability for each outcome will be 1/3). We can represent the probabilities for the possible outcomes as a vector ($\boldsymbol{\theta}$):
$\boldsymbol{\theta} = \left( \frac{1}{3}, \frac{1}{3}, \frac{1}{3} \right)$
If we wanted to cheat, we could produce a "loaded" die, that gives us a higher probability of rolling a 3:
$\boldsymbol{\theta} = \left( \frac{0.9}{3}, \frac{1}{3}, \frac{1.1}{3} \right)$
Regardless of whether we produce a fair or loaded die, $\boldsymbol{\theta}$ has two important properties. First, the sum of probabilities for each outcome must equal 1:
$\sum_{i} \boldsymbol{\theta}_{i} = \boldsymbol{\theta}_{1} + \boldsymbol{\theta}_{2} + \boldsymbol{\theta}_{3} = 1$
The second property is that none of the probabilities can be negative. When these conditions hold, probabilities associated with the results of rolling the die are described by a multinomial distribution.
Regardless of whether we are attempting to produce fair or loaded dice, there will be some variability in the characteristics of the dice we produce. So even if we try to produce fair dice, we won't expect that the probabilities of each outcome for a particular die will be exactly 1/3. If we were to hand-carve each die from a wooden block, we would expect significant variability in the dice we produce, due to the limitations of our craftsmanship, variability in the density of the wood, characteristics of the tools we use, etc. If instead, we were to use a fancy 3D printer to manufacture our dice, we would expect the resulting dice to be much more precise, having significantly less variability than our hand-carved dice.
To characterize this variability mathematically, we would like to know the probability (probability density) of every possible value of $\boldsymbol{\theta}$ for a given manufacturing process. To do this, let's consider each element of $\boldsymbol{\theta}$ as being an independent variable. That is, for $\boldsymbol{\theta} = \lbrace \theta_{1} , \theta_{2} , \theta_{3} \rbrace$, we can treat $\theta_{1}$, $\theta_{2}$, and $\theta_{3}$ each as an independent variable and $\boldsymbol{\theta}$ as a vector in a 3-dimensional space. Since the multinomial distribution requires that these three variables sum to 1, we know that the allowable values of $\boldsymbol{\theta}$ are confined to a plane. Furthermore, since each value $\theta_{i}$ must be greater than or equal to zero, the set of all allowable values of $\boldsymbol{\theta}$ is confined to an equilateral triangle (a 2-simplex) as shown below.
What we want is to know the probability density at each point on this triangle. That is where the Dirichlet distribution can help us. The Dirichlet distribution defines a probability density for a vector-valued input having the same characteristics as our multinomial parameter ($\boldsymbol{\theta}$). Let's start with the formula for the probability density associated with a Dirichlet distribution:
$\text{Dir}\left(\boldsymbol{\alpha}\right)\rightarrow \mathrm{p}\left(\boldsymbol{\theta}\mid\boldsymbol{\alpha}\right)=\frac{\Gamma\left(\sum_{i=1}^{k}\boldsymbol{\alpha}_{i}\right)}{\prod_{i=1}^{k}\Gamma\left(\boldsymbol{\alpha}_{i}\right)}\prod_{i=1}^{k}\boldsymbol{\theta}_{i}^{\boldsymbol{\alpha}_{i}-1}$
It's not important to understand the formula in its entirety but a few details are worth noting. The Dirichlet distribution is parameterized by the vector $\boldsymbol{\alpha}$, which has the same number of elements ($k$) as our multinomial parameter $\boldsymbol{\theta}$. In our case, $k=3$. So you can interpret $\mathrm{p}\left(\boldsymbol{\theta}\mid\boldsymbol{\alpha}\right)$ as answering the question "what is the probability density associated with multinomial distribution $\boldsymbol{\theta}$, given that our Dirichlet distribution has parameter $\boldsymbol{\alpha}$." Our goal is to visualize how $\text{Dir}\left(\boldsymbol{\alpha}\right)$ varies over the simplex of allowable values of $\boldsymbol{\theta}$ for a given value of $\boldsymbol{\alpha}$.
## Setting up the Code
Before we can plot our Dirichlet distributions, we need to do three things:
1. Generate a set of x-y coordinates over our equilateral triangle
2. Map the x-y coordinates to the 2-simplex coordinate space
3. Compute $\text{Dir}\left(\boldsymbol{\alpha}\right)$ for each point
There are numerous ways we could go about generating a set of points over the simplex. For example, we could randomly sample a uniform Dirichlet and map those points back to the x-y space. But that approach can result in undesirable display artifacts near the simplex boundaries for small values of $\boldsymbol{\alpha}$. As of version 1.3, matplotlib provides some convenient functions we can use for generating and refining triangular meshes. Let's start with the corners of our equilateral triangle and define a triangular mesh consisting of just a single element.
In [1]:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.tri as tri
corners = np.array([[0, 0], [1, 0], [0.5, 0.75**0.5]])
triangle = tri.Triangulation(corners[:, 0], corners[:, 1])
Next, we'll subdivide the triangle into a triangular mesh and plot the original and subdivided triangles.
In [2]:
refiner = tri.UniformTriRefiner(triangle)
trimesh = refiner.refine_triangulation(subdiv=4)
plt.figure(figsize=(8, 4))
for (i, mesh) in enumerate((triangle, trimesh)):
plt.subplot(1, 2, i + 1)
plt.triplot(mesh)
plt.axis('off')
plt.axis('equal')
The mesh above is just an example - we'll use a much finer mesh for our plots.
Next, we need to map the x-y coordinates of the mesh vertices to the simplex coordinate space (a.k.a. barycentric coordinates). There are several ways to perform this transform. I will use a simple method that uses vector algebra. For some values of $\boldsymbol{\alpha}$, calculation of the Dirichlet pdf can become numerically unstable at the boundaries of the simplex so our conversion function will take an optional tolerance that will avoid barycentric coordinate values directly on the simplex boundary.
In [3]:
# Mid-points of triangle sides opposite of each corner
midpoints = [(corners[(i + 1) % 3] + corners[(i + 2) % 3]) / 2.0 \
for i in range(3)]
def xy2bc(xy, tol=1.e-3):
'''Converts 2D Cartesian coordinates to barycentric.'''
s = [(corners[i] - midpoints[i]).dot(xy - midpoints[i]) / 0.75 \
for i in range(3)]
return np.clip(s, tol, 1.0 - tol)
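As a quick check of the mapping (not in the original post), the centroid of the triangle should land at the barycenter of the simplex:

```python
print(xy2bc(corners.mean(axis=0)))  # approximately [1/3, 1/3, 1/3]
```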
Having the barycentric coordinates for our simplex mesh vertices, we then need to compute the Dirichlet pdf values for these coordinates.
In [4]:
class Dirichlet(object):
    def __init__(self, alpha):
        from math import gamma
        from functools import reduce  # needed on Python 3, where reduce is no longer a builtin
        from operator import mul
        self._alpha = np.array(alpha)
        self._coef = gamma(np.sum(self._alpha)) / \
            reduce(mul, [gamma(a) for a in self._alpha])
    def pdf(self, x):
        '''Returns pdf value for x.'''
        from functools import reduce
        from operator import mul
        return self._coef * reduce(mul, [xx ** (aa - 1)
                                         for (xx, aa) in zip(x, self._alpha)])
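As an optional cross-check (not in the original post, and assuming scipy is installed), the hand-rolled pdf can be compared against scipy's Dirichlet implementation:

```python
from scipy.stats import dirichlet as sp_dirichlet

alpha = [5, 5, 5]
theta = [0.2, 0.3, 0.5]   # a point on the simplex (components sum to 1)
print(Dirichlet(alpha).pdf(theta))     # hand-rolled pdf
print(sp_dirichlet.pdf(theta, alpha))  # should agree
```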
And lastly, we just need a function to draw the contours for a distribution. The function will draw pdf contours for a Dirichlet distribution where $\theta_{1}$, $\theta_{2}$, and $\theta_{3}$ correspond to the lower left, lower right, and upper corners of the rendered triangle, respectively.
In [5]:
def draw_pdf_contours(dist, nlevels=200, subdiv=8, **kwargs):
import math
refiner = tri.UniformTriRefiner(triangle)
trimesh = refiner.refine_triangulation(subdiv=subdiv)
pvals = [dist.pdf(xy2bc(xy)) for xy in zip(trimesh.x, trimesh.y)]
plt.tricontourf(trimesh, pvals, nlevels, **kwargs)
plt.axis('equal')
plt.xlim(0, 1)
plt.ylim(0, 0.75**0.5)
plt.axis('off')
## Output
Let's start with a few symmetric Dirichlet distributions. The case of $\boldsymbol{\alpha} = (1, 1, 1)$ yields a uniform distribution, where all points on the simplex are equally probable:
In [6]:
draw_pdf_contours(Dirichlet([1, 1, 1]))
For values of $\alpha_{i} < 1$, the distribution concentrates in the corners and along the boundaries of the simplex.
In [7]:
draw_pdf_contours(Dirichlet([0.999, 0.999, 0.999]))
The color scale runs from dark blue (lowest values) to red (highest values). Notice in the plot above that only the corners of the simplex are red and intermediate values lie along the boundary of the simplex.
For values of $\alpha_{i} > 1$, the distribution tends toward the center of the simplex.
In [8]:
draw_pdf_contours(Dirichlet([5, 5, 5]))
As $\alpha_{i}$ increases, the distribution becomes more tightly concentrated around the center of the simplex.
In [9]:
draw_pdf_contours(Dirichlet([50, 50, 50]))
In the context of our example of manufacturing dice, we would produce consistently fair dice in the limit as $\alpha_{i} \rightarrow \infty$.
For a symmetric Dirichlet with $\alpha_{i} > 1$, we will produce fair dice, on average. If the goal is to produce loaded dice (e.g., with a higher probability of rolling a 3), we would want an asymmetric (noncentral) Dirichlet distribution with a higher value for $\alpha_{3}$.
In [10]:
draw_pdf_contours(Dirichlet([30, 30, 50]))
Finally, here are a couple more, just to show how the distributions vary with $\boldsymbol{\alpha}$.
In [11]:
draw_pdf_contours(Dirichlet([1, 2, 3]))
In [12]:
draw_pdf_contours(Dirichlet([2, 5, 15]))
If you would like to create these plots yourself, you can download a more readily usable version of the code used here.
As a final note, while I used the Dirichlet distribution as the prior for the multinomial distribution, it is not the only prior distribution that could be used. For example, the logistic normal distribution could also be used as a prior for multinomial distributions.
|
2018-02-23 15:48:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8315898180007935, "perplexity": 564.7879597873474}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891814801.45/warc/CC-MAIN-20180223154626-20180223174626-00465.warc.gz"}
|
https://www.graduate.technion.ac.il/Theses/Abstracts.asp?Id=33837
|
M.Sc. Student: Tur Nitzan
M.Sc. Thesis: The Metric Relaxation for 0-Extension Admits an Ω(log^(2/3) k) Gap
Department: Computer Science
Supervisor: Associate Prof. Roy Schwartz
Abstract
We consider the 0-Extension problem, where we are given an undirected graph G = (V, E) equipped with non-negative edge weights w : E -> R, a collection T = {t_1, ..., t_k} of k special vertices from V called terminals, and a semi-metric D over T.
The goal is to assign every non-terminal vertex to a terminal while minimizing the sum over all edges of the weight of the edge multiplied by the distance in D between the terminals to which the endpoints of the edge are assigned.
0-Extension admits two known algorithms, achieving approximations of O(log(k)) in Calinescu-Karloff-Rabani (SICOMP '05) and O(log(k)/log(log(k))) in Fakcharoenphol-Harrelson-Rao-Talwar (SODA '03).
Both known algorithms are based on rounding a natural linear programming relaxation called the metric relaxation, in which D is extended from T to all of V.
The current best known integrality gap for the metric relaxation is Ω(log^(1/2) k).
In this work we present an improved integrality gap of Ω(log^(2/3) k) for the metric relaxation.
Our construction is based on the randomized extension of one graph by another, a notion that captures lifts of graphs as a special case and might be of independent interest.
Inspired by algebraic topology, our analysis of the gap instance is based on proving no continuous section (in the topological sense) exists in the randomized extension.
|
2022-08-15 12:47:01
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8139610886573792, "perplexity": 1236.5678101191259}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572174.8/warc/CC-MAIN-20220815115129-20220815145129-00597.warc.gz"}
|
http://gmatclub.com/forum/between-1977-and-1989-the-percentage-of-income-paid-to-126703.html?fl=similar
|
# Between 1977 and 1989, the percentage of income paid to
Senior Manager
Status: May The Force Be With Me (D-DAY 15 May 2012)
Joined: 06 Jan 2012
Posts: 291
Location: India
Concentration: General Management, Entrepreneurship
Between 1977 and 1989, the percentage of income paid to [#permalink] 29 Jan 2012, 01:54
Between 1977 and 1989, the percentage of income paid to Federal taxes by the richest one percent of Americans decreased, from 40 percent to 25 percent. By the end of that same period, however, the richest one percent of Americans were paying a larger proportion of all Federal tax revenues, from 12.7 percent in 1977 to 16.2 percent in 1989.
Which of the following, if true, contributes most to an explanation of the discrepancy described above?
A. Between 1977 and 1989, the Internal Revenue Service increased the percentage of its staff members responsible for audits and tax collection.
B. Between 1977 and 1989, the before-tax income of the richest one percent of Americans increased by over 75 percent when adjusted for inflation.
C. Between 1977 and 1989, many of the richest one percent of Americans shifted their investments from untaxable to taxable assets.
D. Between 1977 and 1989, the top tax rate was reduced from 70 percent to 31 percent and several tax loopholes were eliminated.
E. Between 1977 and 1989, the amount of Federal taxes paid by the richest one percent of Americans increased by $45 billion, while the amount paid by all Americans rose by $50 billion.
_________________
Giving +1 kudos is a better way of saying 'Thank You'.
Magoosh GMAT Instructor
Joined: 28 Dec 2011
Posts: 2787
Re: Federal taxes by the richest one percent of Americans [#permalink] 29 Jan 2012, 12:09
Hi, there. I'm happy to help with this.
CR questions with percents are particularly tricky, because you have to ascertain, in each instant, what is a percent of what?
So, here's the prompt again: "Between 1977 and 1989, the percentage of income paid to Federal taxes by the richest one percent of Americans decreased, from 40 percent to 25 percent. By the end of that same period, however, the richest one percent of Americans were paying a larger proportion of all Federal tax revenues, from 12.7 percent in 1977 to 16.2 percent in 1989."
So, the first percentage --- "percent of income paid to Federal taxes" for the 1% --- that means each one-percenter went from paying 40% of his income in 1977 to 25% of his income in 1989 (that would have been Mr. Reagan's policies). That is only a percent --- if my income went, say, from $1 million in 1977 to $10 million in 1989, then I would have paid 40% of $1 million = $400,000 in taxes in 1977, and I would have paid 25% of $10 million, or $2.5 million, in taxes in 1989. In other words, if the overall amount of income has increased, the dollar amount of taxes paid can increase even if it's a smaller percent of the income.
The second percent: "a larger proportion of all Federal tax revenues" --- so, here, if you look at the big pie, the total dollar amount that the IRS rakes in each year, then the share contributed by the 1% would have increased. In other words, the rich got richer, and the poor got poorer: an oft-repeated description of the 1980s.
The basic way to explain this discrepancy: the dollar amount of income of each 1% person had wildly increased. That's something you need to have sorted out before you look at the answer choices.
Now, the choices.
A. Between 1977 and 1989, the Internal Revenue Service increased the percentage of its staff members responsible for audits and tax collection.
Almost wholly irrelevant --- audits might account for a little more money here and there, but not for a massive increase in revenue.
B. Between 1977 and 1989, the before-tax income of the richest one percent of Americans increased by over 75 percent when adjusted for inflation.
Bingo! Just what we suspected.
C. Between 1977 and 1989, many of the richest one percent of Americans shifted their investments from untaxable to taxable assets.
First of all, there's no historical evidence for that, but think about it. If the tax rate declines, maybe some 1% folks move a little more from tax shelters to something taxable, but it simply doesn't make sense that person would move so much that the overall dollar amount in taxes paid increases to more than what it was at the higher tax rate. It's unlikely one person would do that, and it's unimaginable that "many" of the 1% would do something so daft.
D. Between 1977 and 1989, the top tax rate was reduced from 70 percent to 31 percent and several tax loopholes were eliminated.
Lower taxes, doesn't explain more tax income from those folks. Eliminating loopholes -- may result in some more income, but a result in a substantially larger slice of the Federal pie? Unlikely.
E. Between 1977 and 1989, the amount of Federal taxes paid by the richest one percent of Americans increased by $45 billion, while the amount paid by all Americans rose by $50 billion.
Here we have dollar amount increases. For the 1%, we have the percentage increase, so we could figure out the before & after dollar amounts. For the 99%, we have no info --- no way to figure out the dollar amounts before or after. To borrow DS language, this choice gives us "insufficient" information.
Again, it's very important to have a clear idea of what the discrepancy is and what would resolve it before you start analyzing answer choices.
Does that make sense? Please let me know if you have any questions on what I've said.
Mike
_________________
Mike McGarry
Magoosh Test Prep
Intern
Joined: 03 Dec 2010
Posts: 22
Followers: 0
Kudos [?]: 2 [0], given: 0
Re: Federal taxes by the richest one percent of Americans [#permalink] 14 Mar 2012, 05:09
Hi Mike,
You are right with what you said in the end about understanding the discrepancy and then looking at the answer choices. I read the stimulus, but couldn't really figure out the discrepancy. I moved on to the options and narrowed it down to B or C as the correct option, but I marked C. Quite often I end up selecting the wrong answer choice between 2 options; what do you suggest? I don't want to work on a CR question for more than 2 min.
Thanks,
Magoosh GMAT Instructor
Joined: 28 Dec 2011
Posts: 2787
Followers: 942
Kudos [?]: 3967 [0], given: 44
Re: Federal taxes by the richest one percent of Americans [#permalink] 14 Mar 2012, 10:56
Expert's post
priyalr wrote:
Hi Mike, [...] Quite often I end up selecting the wrong answer choice between 2 options; what do you suggest?
Thank you for your kind words. Here's a free video lesson about CR questions that you may find helpful.
http://gmat.magoosh.com/lessons/579-dis ... n-argument
At Magoosh, we have a series of video lessons to guide your approach to CR questions on the GMAT.
I hope that's helpful. Let me know if you have any further questions.
Mike
_________________
Mike McGarry
Magoosh Test Prep
Senior Manager
Joined: 12 Mar 2012
Posts: 369
Concentration: Operations, Strategy
Followers: 2
Kudos [?]: 139 [0], given: 31
Re: Federal taxes by the richest one percent of Americans [#permalink] 14 Mar 2012, 23:09
Hey Mike,
Thats a brilliant prephrasing above you did.
I was lost and landed on C.
Basically I could not decipher the meaning conveyed by the argument. Could you please suggest what I should do? Mostly the 700-800 range problems bounce off me because I can hardly figure out what's going on.
Could you recommend any advice on this? It will be of great help.
thanx
_________________
Practice Practice and practice...!!
If there's a loophole in my analysis--> suggest measures to make it airtight.
Magoosh GMAT Instructor
Joined: 28 Dec 2011
Posts: 2787
Followers: 942
Kudos [?]: 3967 [1] , given: 44
Re: Federal taxes by the richest one percent of Americans [#permalink] 15 Mar 2012, 12:53
Expert's post
I was lost and landed on C. [...] Could you recommend any advice on this?
Thank you for your kind words. Mastering the high levels of CR is indeed a difficult task. I would recommend checking out Magoosh. We have 200+ GMAT lesson videos, including a whole series on mastering the CR. Here's a sample:
http://gmat.magoosh.com/lessons/579-dis ... n-argument
We have 800+ practice questions, each with its own video explanation. Here's a sample:
http://gmat.magoosh.com/questions/1309
After you submit your answer to that question, the following page will have the video explanation. I believe the strategies we teach might be just what you need to crack the ceiling you are currently hitting on CR. You may also find this blog article helpful.
http://magoosh.com/gmat/2012/arguments- ... -the-gmat/
Let me know if you have any questions.
Mike
_________________
Mike McGarry
Magoosh Test Prep
Senior Manager
Joined: 12 Mar 2012
Posts: 369
Concentration: Operations, Strategy
Followers: 2
Kudos [?]: 139 [0], given: 31
Re: Federal taxes by the richest one percent of Americans [#permalink] 15 Mar 2012, 20:47
thnx for suggestion i am going through the links. will let you know if problem still persists.
_________________
Practice Practice and practice...!!
If there's a loophole in my analysis--> suggest measures to make it airtight.
Manager
Joined: 28 Feb 2011
Posts: 58
GMAT 1: Q V
Followers: 0
Kudos [?]: 15 [0], given: 18
Re: Federal taxes by the richest one percent of Americans [#permalink] 26 Mar 2012, 00:49
Fact 1: the percentage of income paid in Federal taxes by the richest 1 percent of Americans decreased from 40% to 25%. Fact 2: the proportion of all Federal tax revenues contributed by the richest 1 percent increased.
Considering these two facts together, it follows that the income of the richest 1 percent of Americans increased, so IMO the correct answer is B.
Senior Manager
Joined: 22 Dec 2011
Posts: 298
Followers: 3
Kudos [?]: 175 [0], given: 32
Re: Federal taxes by the richest one percent of Americans [#permalink] 20 Oct 2012, 06:32
mikemcgarry wrote:
Hi, there. I'm happy to help with this. [...]
C. Between 1977 and 1989, many of the richest one percent of Americans shifted their investments from untaxable to taxable assets.
First of all, there's no historical evidence for that, but think about it. [...] It's unlikely one person would do that, and it's unimaginable that "many" of the 1% would do something so daft.
Hi Mike - I'm a bit lost with option C
1977: 40% of $100
1989: 25% of ($100 + $1 billion) (the $1B is the amount shifted from untaxable to taxable).
Then we can account for the increase from 12.7 percent in 1977 to 16.2 percent in 1989, right, even though the tax rate was low?
Could you please correct my reasoning?
Intern
Joined: 13 May 2012
Posts: 30
Followers: 0
Kudos [?]: 3 [0], given: 0
Re: Federal taxes by the richest one percent of Americans [#permalink] 20 Oct 2012, 12:30
Jp27 wrote:
mikemcgarry wrote: [...]
Hi Mike - I'm a bit lost with option C
1977: 40% of $100
1989: 25% of ($100 + $1 billion) (the $1B is the amount shifted from untaxable to taxable).
Then we can account for the increase from 12.7 percent in 1977 to 16.2 percent in 1989, right, even though the tax rate was low?
Could you please correct my reasoning?
C just says "*many* of the richest 1 percent" shifted their assets from non-taxable to taxable. What was the amount of this new contribution (relative to the government's total revenue from taxes)? We don't know. "Many of the richest 1%" can mean 10% of the 1%, so if the 1% is 300 people, then 30 people. We don't know how much these 30 people's added contribution accounts for.
Manager
Status: Re-take.. The OG just loves me too much.
Joined: 18 Jun 2012
Posts: 67
Location: India
GMAT 1: 600 Q44 V29
WE: Information Technology (Consulting)
Followers: 4
Kudos [?]: 32 [0], given: 48
Re: Between 1977 and 1989, the percentage of income paid to [#permalink] 20 Oct 2012, 21:42
phew!! tricky question this!! federal taxes, federal revenues, percentage increase, percentage decrease!! too many confusing words..
_________________
Live Life the Way YOU Love It !!
GmatPrep1 [10/09/2012] : 650 (Q42;V38) - need to make lesser silly mistakes.
MGMAT 1 [11/09/2012] : 640 (Q44;V34) - need to improve quant pacing and overcome verbal fatigue.
Magoosh GMAT Instructor
Joined: 28 Dec 2011
Posts: 2787
Followers: 942
Kudos [?]: 3967 [1] , given: 44
Re: Federal taxes by the richest one percent of Americans [#permalink] 22 Oct 2012, 11:38
Expert's post
Jp27 wrote:
Hi Mike - I'm a bit lost with option C
1977: 40% of $100
1989: 25% of ($100 + $1 billion) (the $1B is the amount shifted from untaxable to taxable).
Then we can account for the increase from 12.7 percent in 1977 to 16.2 percent in 1989, right, even though the tax rate was low?
Could you please correct my reasoning?
Dear Jp27,
Think about it this way. We are being asked for an "explanation of the discrepancy." An explanation is something that makes something clear --- explanations resolve uncertainty & ambiguity --- that's the job of an explanation.
Choice (C) contains that beautifully vague word "many" --- how many? a majority? almost all? or just a substantial minority? And then we have the question of --- what was the dollar amount of the investments that were shifted "from untaxable to taxable assets", and how does that compare in dollar amount to the assets that were already being taxed? Much more? much less? about equal? More uncertainty.
We have been asked to explain something. A statement awash in uncertainty does not make a cogent explanation.
Does all this make sense?
Mike
_________________
Mike McGarry
Magoosh Test Prep
Magoosh GMAT Instructor
Joined: 28 Dec 2011
Posts: 2787
Followers: 942
Kudos [?]: 3967 [1], given: 44
Re: Between 1977 and 1989, the percentage of income paid to [#permalink] 22 Oct 2012, 11:44
Expert's post
thebigr002 wrote:
phew!! tricky question this!! federal taxes, federal revenues, percentage increase, percentage decrease!! too many confusing words..
Dear thebigr002,
My friend, if you are studying for the GMAT and headed for business school, you need to get used to arguments just like this. This argument is very typical, in scope and language, of what the GMAT will test. Furthermore, this is precisely the sort of information that managers need to process to make business decisions. If you are aiming for the MBA, hoping to pursue a career in business, you have to acclimate yourself to all these ideas until reading a passage like this is totally natural. I would strongly suggest reading the Economist magazine to build familiarity with fundamental economic ideas. I'll also recommend this series of blog posts on real-life issues to understand for the GMAT.
http://magoosh.com/gmat/2012/gmat-supply-and-demand/
Let me know if you have any further questions.
Mike
_________________
Mike McGarry
Magoosh Test Prep
Senior Manager
Joined: 22 Dec 2011
Posts: 298
Followers: 3
Kudos [?]: 175 [0], given: 32
Re: Federal taxes by the richest one percent of Americans [#permalink] 22 Oct 2012, 19:49
mikemcgarry wrote:
Dear Jp27, [...] We have been asked to explain something. A statement awash in uncertainty does not make a cogent explanation. Does all this make sense? Mike
Mike - Yes this helps a lot and will help a lot on many such questions to come...
Thanks again. +1
Intern
Joined: 22 Oct 2012
Posts: 36
Concentration: Economics, Finance
GMAT Date: 11-08-2012
GPA: 2.67
Followers: 2
Kudos [?]: 38 [0], given: 14
Re: Federal taxes by the richest one percent of Americans [#permalink] 22 Oct 2012, 22:56
priyalr wrote:
Hi Mike, [...] Quite often I end up selecting the wrong answer choice between 2 options; what do you suggest?
From the discussion above, I think many of us consider option B & option C to be the two major contenders for the answer. However, some of us are, ultimately, falling for the wrong guy, option C. I think, in our discussion above, we have missed one crucial point, which is very apparent in the question passage and which is missed by option C.
Let me offer my two cents to the discussion.
As Mike has very brilliantly explained, the paragraph of the question states two things about the period 1977 to 1989:
1. The proportion of income paid to federal taxes by the richest (i.e. top 1%) people has decreased.
2. Proportion of federal taxes contributed by the richest has increased
Now, option C says that many of the richest people shifted their investments from non-taxable to taxable assets. Such a move would have an impact on both of the statements above. Though it would help explain statement 2, it would run completely counter to statement 1, since this option talks about increased tax outflow from the richest without suggesting any corresponding increase in their income. Thus, this option leads to an increased proportion of income paid to taxes, which is the opposite of statement 1.
I hope the above explanation removes option C from the race, leaving us with only option B.
If anyone finds any issues with this, I would be grateful if you point it out.
I would also like to use this forum to emphasize one very important point here. We should never use outside knowledge to judge the validity of any statement. The reason is simple that the critical reasoning questions are designed to test your reasoning skills, not your knowledge. They are not designed to favor people possessing certain knowledge (which, in this case, could be about the tax regime in 1980s in the US). Official GMAT website clearly states this in the test structure of GMAT.
I am emphasizing this because I see that Mike’s argument for not going for option C begins with “First of all, there's no historical evidence for that...”. This is an incorrect approach to attempt critical reasoning questions.
Even the second part of this argument is fallacious. The second part says that:
“If the tax rate declines, maybe some 1% folks move a little more from tax shelters to something taxable, but it simply doesn't make sense that person would move so much that the overall dollar amount in taxes paid increases to more than what it was at the higher tax rate. It's unlikely one person would do that, and it's unimaginable that "many" of the 1% would do something so daft. “
Looking closely at the argument, a question arises, why would even some people move some proportion of their assets, if the tax rate was lowered? No-tax is always better than lower tax. Why would some people do that?
They would do so only when the after-tax return on taxable investments becomes greater than the return on non-taxable investments. So, when the tax rate is lowered to such an extent that the after-tax return on taxable assets becomes greater, people would start switching to them, thereby increasing tax revenues for the government. And in this case, they may shift their entire assets to taxable ones, which could generate enough taxes for the government to more than offset the decrease in the tax rate.
_________________
If this post is helpful, think of giving KUDOS.
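To make that switching condition concrete, a tiny sketch (the yields below are hypothetical; only the comparison logic comes from the post above):

def after_tax(gross_return, tax_rate):
    # After-tax return on a taxable asset
    return gross_return * (1 - tax_rate)

tax_free_yield = 0.05             # assumed tax-free return
print(after_tax(0.08, 0.40))      # 0.048 < 0.05 -> keep the tax shelter
print(after_tax(0.08, 0.25))      # 0.060 > 0.05 -> switch to the taxable asset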
Verbal Forum Moderator
Status: Getting strong now, I'm so strong now!!!
Affiliations: National Institute of Technology, Durgapur
Joined: 04 Jun 2013
Posts: 622
Location: India
GPA: 3.32
WE: Information Technology (Computer Software)
Followers: 85
Kudos [?]: 384 [1] , given: 77
Re: Between 1977 and 1989, the percentage of income paid to [#permalink] 21 Nov 2013, 11:01
Expert's post
New GMAT Club project
_________________
Regards,
S
Consider +1 KUDOS if you find this post useful
Last edited by dentobizz on 22 Nov 2013, 10:28, edited 1 time in total.
updated theory articles
https://fenicsproject.org/olddocs/dolfin/1.3.0/python/programmers-reference/cpp/mesh/MeshTransformation_rotate.html
# MeshTransformation_rotate
dolfin.cpp.mesh.MeshTransformation_rotate(*args)
• rotate(mesh, angle, axis)
  Rotate the mesh around a coordinate axis through the center of mass of all mesh vertices.
  Arguments:
    mesh (Mesh): The mesh.
    angle (float): The number of degrees (0-360) of rotation.
    axis (int): The coordinate axis around which to rotate the mesh.
• rotate(mesh, angle, axis, p)
  Rotate the mesh around a coordinate axis through a given point.
  Arguments:
    mesh (Mesh): The mesh.
    angle (float): The number of degrees (0-360) of rotation.
    axis (int): The coordinate axis around which to rotate the mesh.
    p (Point): The point around which to rotate the mesh.
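A minimal usage sketch (the mesh, angles, and point below are arbitrary choices; only the call signatures come from the documentation above):

from dolfin import UnitSquareMesh, Point
from dolfin.cpp.mesh import MeshTransformation_rotate

mesh = UnitSquareMesh(8, 8)

# Rotate 45 degrees about the z-axis (axis index 2),
# through the center of mass of the mesh vertices.
MeshTransformation_rotate(mesh, 45.0, 2)

# Rotate 90 degrees about the z-axis through a chosen point.
MeshTransformation_rotate(mesh, 90.0, 2, Point(0.5, 0.5))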
https://www.r-bloggers.com/2011/08/we-keep-breaking-records-so-what-get-statistical-perspective/
[This article was first published on Freakonometrics - Tag - R-english, and kindly contributed to R-bloggers.]
This summer, we have been told that some financial series broke some records (here, in French)
For instance, the French CAC40 had negative returns for 11 consecutive days (which had never been seen before).
> library(tseries)
> x<-get.hist.quote("^FCHI")
> Y = x$Close
> Z = diff(log(Y))
> RUN = rle(as.character(Z >= 0))$lengths
> n=length(RUN)
> LOSS=RUN[seq(2,n,by=2)]
> GAIN=RUN[seq(1,n,by=2)]
> TG=sort(table(GAIN))
> TG[as.character(1:13)]
GAIN
1 2 3 4 5 6 7 8 9 <NA> <NA> <NA> 13
645 336 170 72 63 21 7 3 4 NA NA NA 1
> TL=sort(table(LOSS))
> TL[as.character(1:15)]
LOSS
1 2 3 4 5 6 7 8 9 <NA> 11 <NA> <NA>
664 337 186 68 42 14 5 3 1 NA 1 NA NA
> TR=sort(table(RUN))
> TR[as.character(1:15)]
RUN
1 2 3 4 5 6 7 8 9 <NA> 11 <NA> 13
1309 673 356 140 105 35 12 6 5 NA 1 NA 1
Indeed, 11 consecutive days of negative returns is a record. But one should keep in mind the fact that the real record for runs is 13 consecutive days with positive returns…
But what does that mean ? Can we still assume time independence of log-returns (since today, a lot of financial models are still based on that assumption) ?
Actually, if financial series were time-independent, such a probability, indeed, should be rather small, at least for a single run of 10 or 11. Something like
(1/2)^10 ≈ 0.1%
(assuming that each day, the probability to observe a negative return is 50%). But maybe not over 25 years (6250 trading days): the probability to observe a sub-sequence of 10 consecutive negative values (with daily probability of one half) over 6250 observations will be much larger. My guess is that it would be
[2^(n-10) + (n-10)·2^(n-11)] / 2^n, with n = 6250,
where at the numerator we have the number of favourable cases over the total number of cases. At the numerator, the first number is the number of cases where the first 10 (at least) are negative; then, for the second one, we count the number of cases where the first is positive and the next 10 (at least) are negative (and then the second is positive and the next 10 are negative, the third is positive, etc.). For those interested in more details (and a more general formula on runs), an answer can be found here.
But note that the probability is quite large… So it is not that unlikely to observe such a sequence over 25 years.
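As a quick cross-check of that order of magnitude, a small simulation (sketched here in Python; the 6250 trading days and the run length come from above):

import random

def prob_long_run(n=6250, k=11, trials=2000):
    # Chance of seeing a run of k or more consecutive "down" days
    # among n independent fair coin flips.
    hits = 0
    for _ in range(trials):
        longest = current = 0
        for _ in range(n):
            if random.random() < 0.5:   # a down day
                current += 1
                longest = max(longest, current)
            else:
                current = 0
        if longest >= k:
            hits += 1
    return hits / trials

print(prob_long_run())   # comes out around 0.78

so a losing streak of that length is actually more likely than not over 25 years.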
A classical idea when looking at time series is to look at the autocorrelation function of the returns, which might suggest that there is no correlation with past returns. But it should be possible to do more advanced tests.
On the CAC40 series, we can run an independence run test on the latest 100 consecutive days, and look at the p-value,
> library(lawstat)
> u=as.vector(Z[(n-100):n])
> runs.test(u,plot=TRUE)
Runs Test - Two sided
data: u
Standardized Runs Statistic = -0.4991, p-value = 0.6177
The B's here are returns lower than the median (almost null, so they might be considered as negative returns). With such a high p-value, we accept the null hypothesis, i.e. time independence.
If we consider a moving time window, we can see that we accept the assumption of independence most of the time. Actually, here, the time window is 100 days (+/- 50 days), but it is possible to consider 200 days, or even 400 days. So, except if we focus on 2006, it looks like we should reject the idea of time dependence in financial markets.
It is also possible to look more carefully at the distribution of runs, and to compare it with the case of independent samples (here we consider Monte Carlo generation of sequences having the same size),
> m=length(Z)
> ns=100000
> HIST=matrix(NA,ns,15)
> for(j in 1:ns){
+ XX=sample(c("A","B"),size=m,replace=TRUE)
+ RUNX=rle(as.character(XX))$lengths
+ S=sort(table(RUNX))
+ HIST[j,]=S[as.character(1:15)]
+ }
> meana=function(x){sum(x[is.na(x)==FALSE])/length(x)}
> cbind(TR[as.character(1:15)],apply(HIST,2,meana),
+ round(m/(2^(1+1:15))))
[,1] [,2] [,3]
1 1309 1305.12304 1305
2 673 652.46513 652
3 356 326.21119 326
4 140 163.05101 163
5 105 81.52366 82
6 35 40.74539 41
7 12 20.38198 20
8 6 10.16383 10
9 5 5.09871 5
10 NA 2.56239 3
11 1 1.26939 1
12 NA 0.63731 1
13 1 0.31815 0
14 NA 0.15812 0
15 NA 0.08013 0
The first column above is the empirical frequency of runs of length 1, 2, 3, etc. The second one is the average frequency obtained from random simulations of independent samples. The third one is the theoretical frequency m/2^(k+1) for runs of length k, based on a geometric distribution (with mean 1).
Here again, it looks like our time series behaves like an independent sample. Here is also a nice paper by Mark Schilling on the longest run of heads.
So it is not that odd to observe such a series of losses on financial markets….
https://www.speedsolving.com/threads/which-3x3x3-positions-require-at-least-20-moves-to-solve.43928/
# Which 3x3x3 positions require at least 20 moves to solve?
#### learypost
##### Member
I know that any rubik's cube position can be solved in 20 moves or less, but which are the allowable moves? I'm assuming it includes FBUDRL and all of their inverses (i.e., F'B'U' etc.). What about F2B2U2...? What about fbudrl and their inverses and doubles? And what about xyz? Thanks.
EDIT: I'm writing a computer program which will take a position and recursively try every move until it finds a solution which is 20 moves or less. So if it can be done without, e.g., fbudrl, then that would reduce the number of steps by ~20^6, which would of course be hugely helpful. Hopefully, by using a minimal set of moves, the algorithm will be able to find a solution within a reasonable amount of time (i.e., within a few days).
Last edited:
#### Tim Major
##### Platinum Member
rotations aren't needed for computer solutions. All moves are permitted in "optimal" searches. One thing you didn't mention is slice moves, I assume they're 2 moves though not entirely sure.
#### Ollie
##### Member
R, R', R2 = one move
same for all other faces
M, M', M2 = two moves edit: ninja'd
xyz = not counted as moves
#### kunparekh18
##### Member
All single layer non-slice moves, I guess.
#### Stefan
##### Member
the algorithm will be able to find a solution within a reasonable amount of time (i.e., within a few days).
It won't.
#### Cubenovice
##### Forever Slow
It won't.
Maybe the OP has developed the fastest supercomputer ever?
#### Jakube
##### Member
Like the others said, allowed moves are U, U2, U', R, R2, R', F, F2, F', L, L2, L', B, B2, B', D, D2, D'
by the way: with 6 moves you don't have 20^6 possibilities, you have 6^20, which is a bit more
But because you have 18 possible moves, you have ~18^20 possible solutions. You can reduce this number by ignoring solutions like FUU2L (this is FU'L), or LRL' (this is just R), ...
Hopefully, by using a minimal set of moves, the algorithm will be able to find a solution within a reasonable amount of time (i.e., within a few days).
Jaap explains important stuff such as transition tables and pruning tables. These can speed up your program a lot.
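As a small illustration of that pruning (a sketch; the two rules below encode the "same face twice" and "face sandwiched around its opposite" redundancies just mentioned):

FACES = "URFDLB"
OPPOSITE = {"U": "D", "D": "U", "R": "L", "L": "R", "F": "B", "B": "F"}

def count_sequences(depth, last=None, before_last=None):
    # Count move sequences of the given depth, skipping obviously
    # redundant patterns such as U U2 (same face) and L R L (L and R commute).
    if depth == 0:
        return 1
    total = 0
    for f in FACES:
        if f == last:
            continue
        if last is not None and f == OPPOSITE[last] and before_last == f:
            continue
        total += 3 * count_sequences(depth - 1, f, last)  # 3 twists per face
    return total

print(count_sequences(5))  # far fewer than 18**5 = 1,889,568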
#### learypost
##### Member
Like the others said, allowed moves are U, U2, U', R, R2, R', F, F2, F', L, L2, L', B, B2, B', D, D2, D'
by the way: with 6 move you don't have 20^6 possibilities, you have 6^20, which is a bit more
But because you have 18 possible moves, you have ~18^20 possible solutions. You can reduce this number by ignoring solutions like FUU2L (this is FU'L), or LRL' (this is just R), ...
Jaap explains important stuff as transition tables or pruning tables. This can speed up your program a lot.
Thanks. And ya, I meant 6^20. I thought maybe if only FBUDRL were considered valid moves, then I might be able to do it. But if the 20 total moves includes 18 different transformations then there's no way. I'll see what I can do with those tables though.
#### Stefan
##### Member
Thanks. And ya, I meant 6^20. I thought maybe if only FBUDRL were considered valid moves, then I might be able to do it. But if the 20 total moves includes 18 different transformations then there's no way. I'll see what I can do with those tables though.
Even if it were only 6^20 and you could do 10^9 per second, then it would be 42 days omg that can't be a coincidence.
#### Dapianokid
##### Member
The minimum number of moves required to solve the cube is zero. Theorized (and proven) lower and upper bounds for the {,} is 0 face turns, 0 rotations, and 0 slice moves. This means that the cube can be solved without F, U, B, L, R, or D turns, as well!
#### Renslay
##### Member
EDIT: I'm writing a computer program which will take a position and recursively try every move until it finds a solution which is 20 moves or less. So if it can be done without, e.g., fbudrl, then that would reduce the number of steps by ~20^6, which would of course be hugely helpful. Hopefully, by using a minimal set of moves, the algorithm will be able to find a solution within a reasonable amount of time (i.e., within a few days).
Doing a simple recursive search (breadth-first search for example) on the cube is really, really, REALLY inefficient, time- and memory-consuming. Even for supercomputers! You would need not giga, not tera, not even peta, but at least exabytes (millions of terabytes) of memory. Breadth-first search uses memory proportional to the number of vertices! (In this case, the number of states of the cube.) And we didn't even talk about the running time, which is again proportional to the number of states.
One of the most efficient ways of searching for optimal solutions is based on iterative deepening depth-first search using heuristics (IDA*), pruning tables and move tables. Look it up at http://www.jaapsch.net/puzzles/compcube.htm, and examine Kociemba's Two Phase algorithm, the Standard Optimal Solver (SOS) and Kociemba's Huge Optimal Solver (HOS).
http://kociemba.org/cube.htm
Also there are implementations of Kociemba's Two Phase algorithm in Java, Mathematica (see links above), and I wrote it once in Matlab, based on the Mathematica code.
EDIT:
I suggest you should start with an implementation of Thistlethwaite's algorithm. Very easy, using only pre-generated lookup tables for solving the cube (in less than 45 moves guaranteed! 33 on average, if I remember well.)
Last edited:
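For reference, a bare-bones sketch of the IDA* loop mentioned above (illustrative only; a real cube solver would add compact state encodings, move tables, and pruning tables for the heuristic h):

import math

def ida_star(start, is_goal, successors, h):
    # h must never overestimate the remaining distance (e.g. a pruning-table bound).
    bound = h(start)
    path = [start]

    def search(g, bound):
        node = path[-1]
        f = g + h(node)
        if f > bound:
            return f
        if is_goal(node):
            return True
        minimum = math.inf
        for move, child in successors(node):
            if child not in path:          # avoid trivial cycles
                path.append(child)
                t = search(g + 1, bound)
                if t is True:
                    return True
                if t < minimum:
                    minimum = t
                path.pop()
        return minimum

    while True:
        t = search(0, bound)
        if t is True:
            return path        # states from start to goal
        if t == math.inf:
            return None        # no solution within reach
        bound = t              # deepen to the next f-bound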
#### Ollie
##### Member
The minimum number of moves required to solve the cube is zero. Theorized (and proven) lower and upper bounds for the {,} is 0 face turns, 0 rotations, and 0 slice moves. This means that the cube can be solved without F, U, B, L, R, or D turns, as well!
One case. So what?
#### Renslay
##### Member
One case. So what?
Sarcasm for "minimal number of moves for a state". Ignore.
#### Dapianokid
##### Member
If you can store each cube state in one byte, it would take more than 43 billion gigabytes.
I am part of a computer/math geek community and we were once trying to solve any cube optimally with a Ti-83+ calculator in a reasonable amount of time for fun, and the longest part of the program was decompression of our extremely compressed cubestate tables and data. It takes an absolute minimum of 3 bytes to store the solved state with the variable data storage system suggested by the proposer of the challenge, and the most "complicated" states took 6 bytes. We went with a system that stored them all in 6 bytes for speed.
6*43 = 258. 258 billion billion bytes is not even remotely plausible without a massive network of several dozen of the world's fastest and biggest (in memory capacity) supercomputers AND high-throughput networks all devoting 100% of their power towards just storing the stuff in memory.
I'm not even going to attempt to give you an educated estimate as to how long it would take for a hypothetical superduperovar9000computer to optimally search through all paths to every state. The number of cube paths that lead to states that are even 15 (not to mention 20! that's 5 moves extra!) turns away from solved hasn't even been calculated; I'm guessing that's because that number would be gargantuan.
#### Renslay
##### Member
Which reminds me a puzzle:
Assume a look-up table where you can find every cube state's optimal solution. So, the table has about 4.3*10^19 rows; and assume that for a given cube I can immediately tell you its number in the table. Also assume that one look-up requires no time.
So, here is a scramble R U F' B2 (...) R, let's say it's number is 12218. Then the 12218-th element of the table leads me to the optimal solution of this particular scramble.
And here is the fun question: each row can be stored in only two bits! (Yes, 2 bits, or 1/4 of a byte.) So the full table requires "only" 2*4.3*10^19 bits, which is nearly 9.4 exabytes or 9.8 million terabytes. How?
Note: In theory, such a table is easy to construct, as well as the "cube state to number" function. Only the computational time would be astronomical to construct the table, not to mention the memory storage. But once we would have such a table, it would be easy to search up any scramble's optimal solution immediately.
Last edited:
#### Stefan
##### Member
And here is the fun question: each row can be stored in only two bits! (Yes, 2 bits, or 1/4 of a byte.)
About 1.585 bits should suffice as well.
#### Renslay
##### Member
About 1.585 bits should suffice as well.
True. log2(3) to be precise (so the table requires about 7.4 exabytes). But I don't know how you can manage it computationally / algorithmically.
#### Stefan
##### Member
True. log2(3) to be precise (so the table requires about 7.4 exabytes). But I don't know how you can manage it computationally / algorithmically.
There's Arithmetic coding which I think would give you the optimum, but I don't know how complicated random access is. More realistically, you could cover five states in one byte which would be 1.6 bits per state. Or 111 states in 22 bytes, etc:
Code:
states bytes bits/state overhead over log_2(3)
5 1 1.6000000000 0.0150374993
111 22 1.5855855856 0.0006230849
217 43 1.5852534562 0.0002909555
323 64 1.5851393189 0.0001768182
429 85 1.5850815851 0.0001190844
535 106 1.5850467290 0.0000842283
641 127 1.5850234009 0.0000609002
747 148 1.5850066934 0.0000441927
853 169 1.5849941383 0.0000316376
959 190 1.5849843587 0.0000218580
1065 211 1.5849765258 0.0000140251
1171 232 1.5849701110 0.0000076103
1277 253 1.5849647612 0.0000022604
3937 780 1.5849631699 0.0000006692
Python 3:
from math import *

print('states bytes bits/state overhead over log_2(3)')
minBitsPerState = 2
for b in range(1, 1000):
    states = floor(log(2**(8*b)) / log(3))
    bps = (8*b) / states
    if bps < minBitsPerState:
        print('{:6} {:7} {:.10f} {:.10f}'.format(
            states, b, bps, bps - log(3)/log(2)))
        minBitsPerState = bps
But five states in one byte looks the most reasonable as it's simple and gets pretty close to optimal already (less than 1% overhead).
Last edited:
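To make the five-states-per-byte packing concrete, a small sketch (assuming each table entry is one of three values, as in the two-bit discussion above):

def pack5(vals):
    # vals: exactly five ints, each in {0, 1, 2}; 3**5 = 243 <= 256,
    # so the packed value always fits in a single byte.
    b = 0
    for v in reversed(vals):
        b = b * 3 + v
    return b

def unpack5(b):
    vals = []
    for _ in range(5):
        vals.append(b % 3)
        b //= 3
    return vals

assert unpack5(pack5([2, 0, 1, 1, 2])) == [2, 0, 1, 1, 2]

That is 8/5 = 1.6 bits per state, within 1% of the log2(3) optimum.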
#### Renslay
##### Member
There's Arithmetic coding which I think would give you the optimum, but I don't know how complicated random access is. More realistically, you could cover five states in one byte which would be 1.6 bits per state. Or 111 states in 22 bytes, etc
Sounds interesting.
I found a paper entitled "Random access decompression using binary arithmetic coding". Unfortunately it's not free to read, but based on its title and abstract, it could be the solution to the log2(3) storage.
https://www.rocketryforum.com/threads/guns-rockets.69031/page-70
# GUNS & ROCKETS
#### Steve Shannon
##### Well-Known Member
TRF Supporter
My reason for buying Blazer ammo in the first place was twofold:
1. It was priced lower than everything else, and
2. My conscience wouldn’t bother me if I didn’t pick up my brass.
Unfortunately, the Blazer aluminum didn't have enough impulse to reliably operate the roller locked HK94. It works fine in the MPX though. Fortunately, I bought some Privi Partizan for $145/1000. I just wish I bought 10,000.
#### DRAGON64
##### Well-Known Member
TRF Supporter
I found yet another rabbit hole to go down... Cowboy Action Shooting! How and why is the rabbit hole, but I ended up at my first match this past weekend, as a visitor only. I attended a monthly shoot for the North Alabama Regulators and had the most fun I have ever had as a non-participant. But I learned a lot and made a number of connections.
The event started with the Pledge of Allegiance and a prayer...
The match site was set up like an old west town...
And the weapons are just old west bad-assedness
I do not have any firearms or leather just yet, but I do have an alias and an SASS membership... By alias, every shooter has to have an old west alias, just like "Billy The Kid" etc. I am Maricopa John!
#### jd2cylman
##### Still not Carl... ;-)
TRF Supporter
Our neighbor hosts cowboy mounted shooting contest shows. Two six guns shooting 12 balloons in a pattern on a horse timed event. Fun to watch. All contestants must be in full cowboy/cowgirl dress. Talk about an expensive hobby... Horses are bad enough, but then add a pair of six shooters, outfits, ammunition (blanks, but still), and truck & trailer plus fuel to get to events. Someone was telling me they had over $1000.00 in custom tooled holsters.
But it did look like a fun time.
#### manixFan
##### Not a rocket scientist
Cowboy Action Shooting (CAS) is really a lifestyle for a lot of folks that get into it. There is a whole community of folks that make clothes and other gear, and many end up making their own costumes. I bought a really nice holster and belt rig to use as a prop when photographing Colt SAAs from a company that makes them for CAS shooters and it was nearly \$300. And that was 'unfinished', meaning I had to antique it myself to get the look I wanted. Then there are the guns themselves. For regular CAS you need three - a side by side shotgun (or an early design pump action), a lever action rifle, and a single action revolver. Many reload their own ammo to get the right loads for competition.
Since you have to dress up and act in character, it really draws out a lot of interesting folks. Often entire families get involved and attend, all in period garb and with names to match. It is definitely not for the faint of wallet if you really get into it. But it can be a lot of fun to watch and to interact with the participants and their families.
And then there are those who use black powder, which is a rabbit hole in a rabbit hole.
Tony
#### rfjustin
##### Well-Known Member
I'd love to see a CAS match, looks like great fun! Not my cup of tea to get into, but looks like a great way to kill a day or two as a spectator!
#### rfjustin
##### Well-Known Member
6 or 7 dudes, lots of polymer, glass, and steel in pic below.... 1000+ yards on private property... Great time...
#### FredA
##### Well-Known Member
Wow - people actually do shoot beyond 300yards. Nice
#### Michael L
##### Random Pixel Generator
TRF Supporter
All the time
I didn't know there was a gun section (makes sense)
.338 Edge at the 300 yd range. .338 Edge is a .300 Ultramag necked up to .338. Defensive Edge (Shawn Carlock, for anyone that knows long range hunting) built this one. It's kind of a .338 Lapua. It shoots very sub-minute groups. That's my old .30-30 Winchester in the foreground. My dad bought it used in his 20's. He'd be in his 90's if he were alive today. I took it to remind myself how bad I am with open sights.
Speaking of very sub-minute groups
100 yards. That's a 1/2" square (.22 LR target)
300 yard testing of the .338 Edge.
How not to dial the scope for 700 yards. I was so disappointed in myself for botching this. It was an easy 700 yard shot and I forgot to account for the down angle.
.20 Beggs (pea shooter). Modeled after the Edge. Both rifles have a Bat action and Hart barrel. Same Nightforce NXS scope as well
100 yard test target that Shawn sends with his builds. Back when I bought the rifle I was hung up on heavier Berger bullets so I never got this kind of accuracy. When he sent the powder / bullet / charge combo for the .338 I followed it and got the same results that he did. I'll get back on load developing for the .20 one day.
Ladder testing the pea shooter at 200 yards. I picked Group B for the rest of the loads. Either of them would work. I don't remember if I shot it at 300 to pick between these 3
This is a ladder test of my Remington 700 TAC in .223. I bought the .20 Beggs to replace it. It didn't No photos of the rifle unfortunately
Found a crappy shot of the 700 TAC (foreground). That's a 10 year old kid shooting my .338. Its a very easy shooter. Too easy. See that nearly empty blue box just under the rifle? It holds 50. It was full when we got there.
#### Dotini
##### Well-Known Member
TRF Supporter
What do guns, rockets and racing cars all have in common? Loud noises and stinking smoke!
I know, because I've done all of these activities off and on my whole life.
#### judo
##### Well-Known Member
I have an idea to go to an Appleseed shoot one day.
#### Banzai88
##### Lvl 1,Wallet....Destroyed
TRF Supporter
6 or 7 dudes, lots of polymer, glass, and steel in pic below.... 1000+ yards on private property... Great time...
Surprised to see so many sticks and not a single can.
Once you start shooting suppressed, everything else is barbaric!
#### rfjustin
##### Well-Known Member
Surprised to see so many sticks and not a single can.
Once you start shooting suppressed, everything else is barbaric!
Land was in IL... case closed....
#### Michael L
##### Random Pixel Generator
TRF Supporter
Cans and 1000+ yards don't mix well
#### FredA
##### Well-Known Member
Cans and 1000+ yards don't mix well
You would be surprised....
What size targets are you hitting at 1000yds?
What are your first-hit rates on those targets?
#### dhbarr
##### Amateur Professional
You all should go to a benchrest match sometime.
#### Michael L
##### Random Pixel Generator
TRF Supporter
You would be surprised....
What size targets are you hitting at 1000yds?
What are your first-hit rates on those target?
No, not for legit subsonic rounds I wouldn't. I've drilled many a primer pocket and loaded quite a few subsonic rounds for my son-in-law (he has a night/day, helo, or dogs hog-hunting business). My .338 has more energy at 1000 yards than a .308 subsonic round does at 100 yards. The bullet is not stable for very long.
On the other hand, I've seen more than a few people hit a fairly small target with a pistol, .45 ACP and 9mm, at 300 yards with pretty impressive consistency once they were on. I also watched a guy hit the "Buffalo" (a 10" plate at 800 yards) at a black powder range with a bone stock AR-15, no scope. Once he was on he could make consistent, repetitive hits very quickly. I wouldn't want him shooting at me.
#### FredA
##### Well-Known Member
You all should go to a benchrest match sometime.
I've made the final at EBR - but that's airguns.
I do intend to hit the "King of the Mile" or one of those competitions once travel resumes.
I know there's a lot of talk about long range shooting.
But not seen much evidence that people actually do it much outside of those competing.
#### Michael L
##### Random Pixel Generator
TRF Supporter
I shoot 800+ when I can. I'm currently working in NW NM but I don't have my rifle, etc here (etc as in reloading bench. My bullets don't come in a box). There's a lot of BLM land here and a lot of mile+ space. I'm hoping to do some shooting while I'm here. I'll grab the .338 when I'm home next. There's a lot of space here but also a lot of wind... the 338 with a 300gr SMK doesn't mind the wind as much as the smaller bore rifles do.
#### Banzai88
##### Lvl 1,Wallet....Destroyed
TRF Supporter
Cans and 1000+ yards don't mix well
Not true by any stretch of the imagination. Cans are, by no means, meant for subsonic shots only.
Last edited:
#### rharshberger
##### Well-Known Member
Not true by any stretch of the imagination. Cans are, by no means, meant for subsonic shots only.
However, suppressors do have a measurable effect on accuracy, and much of it is dependent on the quality of the can and the load for the rifle. 1000 yd shots are doable with suppressors.
#### Michael L
##### Random Pixel Generator
TRF Supporter
Personally, I don't care for using a can on a rifle and using standard loads. It's hard on the internals of some cans. It's still noisy so what's the point? My son-in-law shoots full power loads through his cans all the time on his hog and coyote hunts.
Note the word "personally" at the start of the sentence. I don't care what other people do as long as it's not pointed at me.
1,000 yard shots are doable with open sights... it's doable with no sights... How repeatable is it? Are you just slinging lead or trying to land the projectile inside of a small, sub-MOA circle consistently?
If you're shooting subsonic rounds through a can at 1000 yards the effect is measurable and significant. It's not rocket science it's physics. Actually... it kind of is rocket science...
.308, 200 gr Lapua subsonic bullet, Trail Boss powder (hard to come by these days, or so I'm told). Primer hole bored to 9/64". These days most people say that opening up the primer flash hole is not necessary. That suits me fine. It's a pita to put a case in a collet chuck, bore it, take the case out, rinse, repeat, and it makes the case "special".
My subsonic loads average 1,025 fps. Supersonic, at sea level, is 1,100 fps. 1,000 yards, 80 degrees F, sea level, 100 yard zero
Bullet impact relative to LOS
-1,888.1" / -157' / -180.1 MOA / 720 clicks (sounds like a lot but it's only 3 degrees)
Zero mph wind. Windage
41.1 inches right
3.9 MOA
16 clicks
Velocity at impact: 755 fps. Still useful but at somewhere between 200 and 300 yards the bullet starts to tumble. Bye bye accuracy
Time of Flight: 3.451s Lots of time for a little gust, dragonfly, etc to affect the flight
Energy at impact: 253 fpe (foot pounds of energy). 100 fpe is considered the minimum to inflict a wound
.308 with 168gr bullet (Berger, SMK, whatever). I don't recall the powder and primer used. Its' a hunting load
Bullet impact relative to LOS
-414.02" / -34.5'' / -39.5 MOA / 158 clicks
Zero mph wind. Windage
11.1 inches right
1.1 MOA
4 clicks
Velocity at impact:1,123 fps. That'll leave a mark...
Time of flight: 1.766s
Energy at impact: 472 fpe (foot pounds of energy)
A .338 300gr SMK or Berger at 1000 yards drops 25 MOA (about 22'), hits at 1,763 fps with 2071 fpe, and takes 1.347s to arrive.
I will say this in favor of cans and using a "normal" bullet - a few years ago Shawn Carlock showed his .338 Edge with a can that he made on it. He wasn't and didn't sell them. He said that he could shoot the rifle without hearing protection and accuracy was still decent.
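For what it's worth, those energy figures follow one standard form of the shooter's energy formula, fpe = grains × fps² / 450,240 (the constant folds together the grains-to-pounds conversion and 2g); a quick check:

def fpe(grains, fps):
    # Kinetic energy in foot-pounds from bullet weight (grains) and velocity (fps).
    return grains * fps ** 2 / 450240.0

print(round(fpe(200, 755)))    # 253  -> the subsonic .308 at 1000 yards
print(round(fpe(168, 1123)))   # 471  -> the supersonic .308 (472 above)
print(round(fpe(300, 1763)))   # 2071 -> the .338 with a 300gr SMK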
#### teepot
##### Well-Known Member
TRF Supporter
I have a YHM 30 cal can and it's rated for full auto. I usually use full power loads. It quiets the muzzle blast but you still hear the crack as it goes down range. I also use it on an AR. When I've shot subsonic rounds thru it you hear the hammer fall and a poof. My .22 cans I almost always shoot subsonic I have a Gemtech and a Spikes.
#### John Kemker
##### Well-Known Member
TRF Supporter
Useless Trivia:
The 'silencer,' more accurately known as the suppressor, was invented by Hiram Percy Maxim, the founder of the American Radio Relay League (ARRL), the Amateur Radio society for the United States. He did it so he could shoot on his property without unduly disturbing his neighbors. The same design elements also became the automobile muffler...
dah-dah-di-di-dit di-di-dit-dah-dah
PS: His dad invented the Maxim machine gun.
50 BMG APIT...
#### MClark
##### Well-Known Member
I shot a Trapdoor Springfield with black powder loads at 1200 yards.
The range marks on the original sights are close with the standard load at 1350 fps, barely supersonic. Standing offhand with no support we could hold about an 8 foot group. Sounds like a terrible group but try it.
And the 405 grain lead bullet really will go through a railroad tie at a 1000 yards!
#### Michael L
##### Random Pixel Generator
TRF Supporter
I shot a Trapdoor Springfield with black powder loads at 1200 yards.
The range marks on the original sights are close with the standard load at 1350 fps, barely supersonic. Standing offhand with no support we could hold about an 8 foot group. Sounds like a terrible group but try it.
And the 405 grain lead bullet really will go through a railroad tie at a 1000 yards!
Handheld... none for me thanks. It doesn't take much "wiggle" to get outside of that 8' box
#### rfjustin
##### Well-Known Member
I shot a Trapdoor Springfield with black powder loads at 1200 yards.
The range marks on original sites are close with the standard load at 1350, barely supersonic. Standing offhand with no support we could hold about a 8 foot group. Sounds like a terrible group but try it.
And the 405 grain lead bullet really will go through a railroad tie at a 1000 yards!
An 8 foot group for that style of weapon at 1200 yards ain't bad at all!
#### MClark
##### Well-Known Member
Over about 300 yards the Trapdoor was used for volley fire. With a good range estimate, a dozen guns fired at once are very likely to be able to hit a man-sized target at 1000.
#### FredA
##### Well-Known Member
An 8-foot group for that style of weapon at 1200 yards ain't bad at all!
Hmm.....most of my precision shooter friends are talking quarter-MOA or better.
One MOA at 1000 yards is roughly 10 inches.
At our range we are targeting 1-sqft steels all the way to 1600 yards.
Hitting those on the first shot is the goal.
But we're not trying that off-hand nor with BP loads.....
|
2021-05-05 22:42:26
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.30131834745407104, "perplexity": 7339.70337939392}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988696.23/warc/CC-MAIN-20210505203909-20210505233909-00333.warc.gz"}
|
https://www.greencarcongress.com/2015/02/opinion-why-oil-prices-must-go-up.html
|
## Opinion: Why oil prices must go up
##### 19 February 2015
by Nick Cunningham for Oilprice.com
It may be difficult to look beyond the current pricing environment for oil, but the depletion of low-cost reserves and the increasing inability to find major new discoveries ensure a future of expensive oil.
While analyzing the short-term trajectory of oil prices is certainly important, it obscures the fact that over the long-term, oil exploration companies may struggle to bring new sources of supply online. Ed Crooks over at the FT persuasively summarizes the predicament. Crooks says that 2014 is shaping up to be the worst year in the last six decades in terms of new oil discoveries (based on preliminary data).
Worse still, last year marked the fourth year in a row in which new oil discoveries declined, the longest streak of decline since 1950. The industry did not log a single “giant” oil field. In other words, oil companies are finding it more and more difficult to make new oil discoveries as the easy stuff runs out and the harder-to-reach oil becomes tougher to develop.
The inability to make new discoveries is not due to a lack of effort. Total global investment in oil and gas exploration grew rapidly over the last 15 years: capital expenditures nearly tripled to $700 billion between 2000 and 2013, while output increased only 17% (see IEA chart).

Despite record levels of spending, the largest oil companies are struggling to replace their depleted reserves. BP reported a reserve replacement ratio (the volume of new reserves added to a company's portfolio relative to the amount extracted that year) of 62%. Chevron reported 89%, and Shell posted just 26%. ExxonMobil and ConocoPhillips fared better, each posting more than 100%. Still, unless the oil majors significantly step up spending, they will not only be unable to make new discoveries, but their production levels will start to fall (some of them are already seeing this begin to happen).

The IEA predicts that the oil industry will need to spend $850 billion annually by the 2030s to increase production. An estimated $680 billion each year, or 80% of the total spending, will be necessary just to keep today's production levels flat.

However, now that oil prices are so low, oil companies have no room to boost spending. All have plans to reduce expenditures in order to stem financial losses. But that only increases the chances of a supply crunch at some point in the future. Put another way, if the oil majors have been unable to make new oil discoveries in years when spending was on the rise, they almost certainly won't be able to find new oil with exploration budgets slashed. Long lead times on new oil projects mean that the dearth of discoveries in 2014 doesn't have much of an effect on current oil prices, but it could lead to a price spike in the 2020s.

All of this comes despite the onslaught of shale production that US companies have brought online in recent years. US oil production may have increased by 60 to 70% since 2009, but the new shale output still only amounts to around 5% of global production. Not only that, but shale production is much more expensive than conventional drilling. As conventional wells decline and are replaced by shale, the average cost per barrel of oil produced will continue to rise, pushing up prices. Moreover, with rapid decline rates, the shale revolution is expected to fade away in the 2020s, leaving the world ever more dependent on the Middle East for oil supplies.

The problem with that scenario is that the Middle East will not be able to keep up. Middle Eastern countries "need to invest today, if not yesterday" in order to meet global demand a decade from now, the International Energy Agency's Chief Economist Fatih Birol said on the release of a report in June 2014. In fact, half of the additional supply needed from the Middle East will have to come from a single country: Iraq. Birol reiterated those comments on February 17 at a conference in Japan, but his warnings have grown more ominous as the security situation in Iraq has deteriorated markedly since last June.

"The security problems caused by Daesh (IS) and others are creating a major challenge for the new investments in the Middle East and if those investments are not made today we will not see that badly needed production growth around the 2020s," Birol said, according to Reuters.

If Iraq fails to deliver, the world could see oil prices surge at some point in the coming decade.
Despite the urgency, "the appetite for investments in the Middle East is close to zero, mainly as a result of the unpredictability of the region," he added.

### Comments

Adding to the myth that oil is scarce and running out. There is an unlimited supply of oil; the only issue is production cost. Yes, the marginal cost, or the cost of the next barrel, increases. However, that has always been true from the first barrel produced. The major oil companies have not and will not publicly disclose their actual reserves. It's a closely guarded secret known only to senior management. If you carefully read any such disclosure from a major, you will find all kinds of caveats. On the other side of the equation, worldwide demand is declining and unlikely to increase as fuel efficiency technology kicks in.

This is a pretty good summarization of the situation. Oil prices don't have to go up, but they will. The truth is that economically recoverable oil is becoming more and more scarce. We are running out of economically recoverable oil. Gasoline prices near $4 a gallon will help with future exploration, but they will also help renewable sources, which do not become more expensive with time but rather less expensive. There is no worry about having enough energy; the only worry happens when you insist that it be oil. Oil limits our world now, and if we don't recognize that then it will limit us in the future too.
Personally I think the old world bankers who still dominate the world economy are so in love with war and destruction that they lament the inevitable destruction of oil, thus their reluctance to invest in the coming future technologies. They may somewhat fear that they will lose control of markets, and they may, but mostly they love the war. It's so much fun to be the hand that sweeps away nations as if God, and they will miss that.
With autonomous BEVs coming in just a few years after 2020 and US oil shale production still growing on a monthly basis despite 50 USD per barrel I am less worried about oil than I have ever been. I still think it is unfortunate that the Saudis have let the oil price fall this fast to such a low level. Clearly, 50 USD is very painful for the high cost conventional oil drilling in the US. This is evident from the 40% annual drop in vertical and directional drilling rigs that are drilling in conventional oil fields. The 13% annual drop in horizontally drilling rigs that are drilling in oil shale is less dramatic a decline because it has become the lowest hanging fruit in the oil industry.
Source for increases in us shale oil production
http://www.eia.gov/petroleum/drilling/#tabs-summary-2
Source for drops in various US drilling rigs
http://www.wtrg.com/rotaryrigs.html
Solar energy can be collected, stored and transformed into H2, NG, Diesel, Jetfuel, Gasoline etc.
There is enough solar energy to cover our energy needs at least 1,000 to 100,000 times over.
I think that we can't predict the price of oil because the price goes up and down each day. Production is hard to predict, but they have more tools than ever to discover oil and extract it; they rely more and more on higher technology. If they ever succeed in inventing cheap, abundant new-generation nuclear, they can use electricity to make synthetic gasoline or other fuels, so it will lead to a massive surplus of gasoline and I will drive my car to the other end of the country.
The comments to this entry are closed.
|
2023-01-31 01:23:25
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22197844088077545, "perplexity": 1828.2327958494238}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499831.97/warc/CC-MAIN-20230130232547-20230131022547-00625.warc.gz"}
|
https://math.stackexchange.com/questions/2324567/trigonometric-functions-of-non-acute-angles
|
Trigonometric functions of non-acute angles.
$\sin \theta = \frac {\text{perpendicular}} {\text{hypotenuse}}$
$\cos \theta = \frac {\text{base}} {\text{hypotenuse}}$
But in unit circles, we find the trigonometric values of obtuse angles. How is that even possible when there's no right angled triangle present?
• You just define the sine of obtuse angles that way. There are no laws in mathematics. – Kenny Lau Jun 16 '17 at 4:13
A lot of the time, mathematicians will devise something that works for a particular case, and then see if they can extend or generalise it to a bigger area. For example, you can use fairly basic proofs to show that if $n$ and $m$ are integers, then $a^n \times a^m = a^{n + m}$. So mathematicians ask - do they have to be integers? Can we extend the exponentiation rule so that it works for any $n$ and $m$? And if you just take it as given, then it works fine, and then when you devise some alternative ways of expressing exponentiation you can prove it explicitly and it doesn't really break anything.
So, similarly, you can start with $\sin \theta = \frac{\mbox{opposite}}{\mbox{hypotenuse}}$ for acute angles $0 < \theta < \frac{\pi}{2}$, then you derive a few neat properties of the function, then you draw a circle around your triangle and scale it down so the radius is equal to 1, and then you ask "but what if I used a different angle?" and you can see that you can assign values to the trigonometric functions based on the point on the circle, and then you just confirm that your nice properties still hold - things like $\sin^2 \theta + \cos^2 \theta = 1$ and so forth - and you just say "we will define the sine function to be like this" and it all works nicely.
Which is not to say that it always works nicely - after you make such an extension you still have to prove that it makes sense and doesn't break anything else, otherwise you wind up with things like $1 - 1 + 1 - 1 + \ldots = \frac{1}{2}$.
So you have the angle, then you just draw your triangle like that.
Then, you have $\displaystyle \sin(135^\circ)=\frac{\sqrt{2}}{2}$, and $\displaystyle \cos(135^\circ)=-\frac{\sqrt{2}}{2}$.
Hope I helped!
Another picture, for $225^\circ$.
Here, $\displaystyle \cos(225^\circ)=\sin(225^\circ)=-\frac{\sqrt{2}}{2}$.
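As a quick numerical sanity check of the values above, a minimal C sketch using only the standard math library (this verifies, rather than proves, that the unit-circle definition reproduces $\pm\frac{\sqrt{2}}{2}$ at these angles):

#include <math.h>
#include <stdio.h>

int main(void) {
    const double PI = acos(-1.0);
    const double deg[] = {135.0, 225.0};
    const double ref = sqrt(2.0) / 2.0;   /* ~0.707107 */
    for (int i = 0; i < 2; i++) {
        double rad = deg[i] * PI / 180.0; /* degrees -> radians */
        /* (cos t, sin t) is the point on the unit circle at angle t */
        printf("%g deg: cos = %+.6f, sin = %+.6f (sqrt(2)/2 = %.6f)\n",
               deg[i], cos(rad), sin(rad), ref);
    }
    return 0;
}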
I will 'define' the trigonometric ratios in this way.
Suppose that I am standing and holding a rod of length $1$ unit in front of me, with one end of the rod next to my foot on the ground. The rod has an inclination of $\theta$ to the ground. $\sin\theta$ is the distance between the other end of the rod and the ground. The sun is directly above me and casting a shadow of the rod on the ground. $\cos\theta$ is the length of the shadow. $\tan\theta$ is the quotient of the two lengths.
If $\theta$ is obtuse, the rod is behind me, with one end still next to my foot. The distance between the other end of the rod and the ground is still $\sin\theta$, and it is positive. The shadow of the rod is now behind me, and I take its length as negative. This is $\cos\theta$ and we have $\cos\theta<0$. $\tan\theta$ is the quotient of these two lengths, which is also negative.
I can extend this 'definition' to a general $\theta$. If I hold the rod vertically, $\theta=90^\circ$. $\sin90^\circ$ is the distance between the other end of the rod and the ground, which is now equal to the length of the rod. So $\sin90^\circ =1$. The length of the shadow is $0$ and so $\cos90^\circ =0$.
If we take $\theta=200^\circ$, the rod is below the ground (we have to imagine that). So $\sin200^\circ$ is negative....
• Can you add a diagram to make things clearer? – user342531 Jun 16 '17 at 6:27
• @Abcd My 'definition' is essentially the unit circle definition of trigonometric ratios, but I avoid using the abstract setting of the coordinate plane. I use this method to teach my students, and they can almost at once tell why $\sin 150^\circ=\sin30^\circ$, why $\cos 310^\circ$ is positive, or why $\tan 270^\circ$ is undefined. I will take a long stick to my class to teach them. But I'm not good at drawing. It would be great if someone could draw a picture, but I'm not able to do that. Sorry for that. Also, I am not a native English speaker, so maybe I cannot express my idea very well. – CY Aries Jun 16 '17 at 6:36
|
2020-10-28 19:35:37
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8663037419319153, "perplexity": 162.15522804479363}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107900860.51/warc/CC-MAIN-20201028191655-20201028221655-00527.warc.gz"}
|
https://www.bartleby.com/solution-answer/chapter-26-problem-4mad-financial-and-managerial-accounting-15th-edition/9781337902663/assume-boulder-creek-industries-in-mad-26-3-assigns-the-following-probabilities-to-the-estimated/e20d5c1d-756e-11e9-8385-02ee952b546e
|
# Assume Boulder Creek Industries in MAD 26-3 assigns the following probabilities to the estimated annual net cash flows:

a. Compute the expected value of the annual net cash flows.
b. Determine the expected net present value of the equipment, assuming a desired rate of return of 12% and expected annual net cash flows computed in part (a). Use the present value tables (Exhibits 2 and 5) provided in the chapter in determining your answer.
c. Based on your results in parts (a) and (b), should Boulder Creek Industries invest in the equipment?
### Financial And Managerial Accounting
15th Edition
WARREN + 1 other
Publisher: Cengage Learning,
ISBN: 9781337902663
a.
To determine
Compute the expected value of the annual net cash flows.
### Explanation of Solution
Compute the expected value of the annual net cash flows:
| Annual net cash flow | Probability of occurring | Expected value |
| --- | --- | --- |
| $800,000 | 0.60 | $480,000 |
| $600,000 | 0... | |
b.
To determine
Compute the net present value of the equipment, assuming 12% desired rate of return for the given annual net cash flows in part (a).
c.
To determine
Recommend whether Company BC should invest in the equipment based upon parts (a) and (b).
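A minimal C sketch of the part (a)/(b) arithmetic. Note the assumptions: only the first table row ($800,000 at probability 0.60) survives in the excerpt above, so the second probability, the equipment cost, and the useful life below are placeholders, not the textbook's figures.

#include <math.h>
#include <stdio.h>

int main(void) {
    /* Part (a): expected value of the annual net cash flows,
       E[CF] = sum of p_i * cf_i. The 0.40 probability is a
       placeholder; the source table is truncated after row one. */
    double cash[] = {800000.0, 600000.0};
    double prob[] = {0.60, 0.40};
    double expected = 0.0;
    for (int i = 0; i < 2; i++) expected += prob[i] * cash[i];

    /* Part (b): NPV at the 12% desired rate of return, treating
       E[CF] as an ordinary annuity. Cost and life are placeholders. */
    double rate = 0.12, cost = 2000000.0;
    int years = 5;
    double annuity = (1.0 - pow(1.0 + rate, -years)) / rate; /* PV factor */
    double npv = expected * annuity - cost;

    printf("Expected annual net cash flow: $%.0f\n", expected);
    printf("NPV at 12%% over %d years: $%.0f\n", years, npv);
    /* Part (c): invest if the NPV is positive at the desired rate. */
    return 0;
}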
|
2020-02-26 16:59:37
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25158095359802246, "perplexity": 3764.7742761046056}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146414.42/warc/CC-MAIN-20200226150200-20200226180200-00064.warc.gz"}
|
https://blog.acolyer.org/2019/11/04/an-analysis-of-performance-evolution-of-linuxs-core-operations/
|
# An analysis of performance evolution of Linux’s core operations
An analysis of performance evolution of Linux’s core operations Ren et al., SOSP’19
I was drawn in by the headline results here:
This paper presents an analysis of how Linux’s performance has evolved over the past seven years… To our surprise, the study shows that the performance of many core operations has worsened or fluctuated significantly over the years.
When you get into the details I found it hard to come away with any strongly actionable takeaways though. Perhaps the most interesting lesson/reminder is this: it takes a lot of effort to tune a Linux kernel. For example:
• “Red Hat and Suse normally required 6-18 months to optimise the performance of an upstream Linux kernel before it can be released as an enterprise distribution”, and
• “Google’s data center kernel is carefully performance tuned for their workloads. This task is carried out by a team of over 100 engineers, and for each new kernel, the effort can also take 6-18 months.”
Meanwhile, Linux releases a new kernel every 2-3 months, with between 13,000 and 18,000 commits per release.
Clearly, performance comes at a high cost, and unfortunately, this cost is difficult to get around. Most Linux users cannot afford the amount of resource large enterprises like Google put into custom Linux performance tuning…
For Google of course, there’s an economy of scale that makes all that effort worth it. For the rest of us, if you really need that extra performance (maybe what you get out-of-the-box or with minimal tuning is good enough for your use case) then you can upgrade hardware and/or pay for a commercial license of a tuned distributed (RHEL).
A second takeaway is this: security has a cost!
### Measuring the kernel
The authors selected a set of diverse application workloads, as shown in the table below, and analysed their execution to find out the system call frequency and total execution time.
A micro-benchmark suite, LEBench, was then built around the system calls responsible for most of the time spent in the kernel.
On the exact same hardware, the benchmark suite is then used to test 36 Linux release versions from 3.0 to 4.20.
All kernel operations are slower than they were four years ago (version 4.0), except for big-write and big-munmap. The majority (75%) of the kernel operations are slower than seven years ago (version 3.0). Many of the slowdowns are substantial…
The following figure shows the relative speed-up/slow-down across the benchmarked calls (y-axis) across releases (x-axis). The general pattern to my eye is that things were getting better / staying stable until around v4.8-v4.14, and after that performance starts to degrade noticeably.
### Analysis
We identify 11 kernel changes that explain the significant performance fluctuations as well as more steady sources of overhead.
These changes fall into three main groups:
1. (4) Security enhancements (e.g. to protect against Meltdown and Spectre).
2. (4) New features introduced into the kernel that came with a performance hit in some scenarios
3. (3) Configuration changes
In terms of the maximum combined slowdown though, it's not the Meltdown and Spectre patches that cause the biggest slowdowns (146% cf. a 4.0 baseline), but missing or misconfigured configuration changes (171%). New features also contribute a combined maximum slowdown of 167%. If you drill down into the new features though, some of these are arguably security related too, e.g. the cgroup memory controller change for containers.
The following chart shows the impact of these 11 changes across the set of system calls under study.
It’s possible to avoid the overheads from these 11 changes if you want to, but that doesn’t feel like a path to recommend for most of them!
With little effort, Linux users can avoid most of the performance degradation from the identified root causes by actively reconfiguring their systems. In fact, 8 out of 11 root causes can be disabled through configuration, and the other 3 can be disabled through simple patches.
Testing against real-world workloads (Redis, Apache, Nginx), disabling the 11 root causes results in maximum performance improvements in these three applications of 56%, 33%, and 34% respectively. On closer examination, 88% of the slowdowns experienced by these applications can be tied back to just four of the eleven changes: forced context tracking (a configuration error), kernel page table isolation (Meltdown protection), missing CPU idle power states (in the configuration bucket, but really due to older kernel versions lacking specifications for the newer hardware used in the benchmarking, which is kind of fair game?), and avoidance of indirect jump speculation (Spectre).
#### Security related root causes
1. Kernel page table isolation (KPTI), introduced to protect against Meltdown. The average slowdown caused by KPTI across all microbenchmarks is 22%, with the recv and read tests seeing 63% and 59% slowdowns. Before KPTI, the kernel and user space used one shared page table; with KPTI they have separate page tables. The main source of introduced overhead is swapping the page table pointers on every kernel entry and exit, together with a TLB flush. The flush can be avoided on processors with the process-context identifier (PCID) feature, but even this isn't enough to avoid the reported slowdowns.
2. Avoidance of indirect branch speculation (the Retpoline patch) to protect against Spectre. This causes average slowdowns of 66% across the select, poll, and epoll tests. The more indirect jumps and calls in a test, the worse the overhead. The authors found that turning each indirect call here into a switch statement (a direct conditional branch) alleviates the performance overhead; a toy sketch of the difference follows this list.
3. SLAB freelist randomization, which increases the difficulty of exploiting buffer overflow bugs. By randomising the order of free spaces for objects in a SLAB, there is a notable overhead (37-41%) when sequentially accessing large amounts of memory.
4. The hardened usercopy patch, which validates kernel pointers used when copying data between userspace and the kernel.
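To make the indirect-call point from item 2 concrete, a toy C sketch of the two dispatch styles (this is an illustration of the general technique, not the kernel's actual code): the first dispatcher calls through a function pointer, exactly the kind of indirect call a retpoline slows down, while the second uses a switch that compiles to direct conditional branches.

#include <stdio.h>

typedef long (*op_fn)(long);

static long op_inc(long x) { return x + 1; }
static long op_dbl(long x) { return x << 1; }

/* Indirect call through a function pointer: with retpolines
   enabled, every call here pays the retpoline thunk cost. */
static long dispatch_indirect(op_fn handler, long x) {
    return handler(x);
}

/* Switch-based dispatch: compiles to direct conditional
   branches, so no retpoline is involved. */
static long dispatch_switch(int op, long x) {
    switch (op) {
    case 0:  return op_inc(x);
    case 1:  return op_dbl(x);
    default: return x;
    }
}

int main(void) {
    printf("%ld %ld\n", dispatch_indirect(op_inc, 41),
                        dispatch_switch(1, 21));
    return 0;
}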
#### New-feature related root causes
1. The ‘fault-around’ feature aims to reduce the number of minor page faults, but introduces a 54% slowdown in the ‘big-pagefault’ test where its access pattern assumptions do not hold.
2. The cgroup memory controller was introduced in v2.6 and is a key building block of containerization technologies. It adds overhead to tests that exercise the kernel memory controller, even when cgroups aren't being used. It took 6.5 years (until v3.17) for this overhead to begin to be optimised. Before those optimisations, slowdowns of up to 81% were observed; afterwards this was reduced to 9%.
3. Transparent huge pages (THP) have been in and out and in and out again as a feature enabled by default. THP automatically adjusts the default page size and allocates 2MB (huge) pages, but can fall back to 4KB pages under memory pressure. Currently it is disabled by default. In what seems to be a case of damned-if-you-do, damned-if-you-don’t, without THP some tests are up to 83% slower.
4. Userspace page fault handling allows a userspace process to handle page faults for a specific memory region. In most cases its overhead is negligible, but the big-fork test sees a 4% slowdown with it.
#### Configuration related root causes
1. Forced context tracking was released into the kernel by mistake in versions 3.10 and 3.12-15 (it's a debugging feature used in the development of the reduced scheduling clock-ticks (RSCT) feature). It was enabled in several released Ubuntu kernels due to misconfiguration. Forced context tracking was finally switched off 11 months after the initial misconfiguration. It slowed down all of the 28 tests by at least 50%, and 7 of them by more than 100%.
2. The TLB layout change patch was introduced in v3.14, and enables Linux to recognise the size of the second-level TLB on newer Intel processors. It’s on the list as a configuration related problem since there was a six-month period when the earliest Haswell processors were released but the patch wasn’t, resulting in a slowdown running on those processors.
3. The CPU idle power-state support patch similarly informs the kernel about fine-grained power-saving states available on Intel processors. It’s on the list because it wasn’t backported to the LTS kernel lines at the time, giving reduced performance on newer processors with those kernels.
|
2020-10-25 02:45:31
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.28377050161361694, "perplexity": 3518.4632475251133}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107885126.36/warc/CC-MAIN-20201025012538-20201025042538-00314.warc.gz"}
|
https://www.integreat.ca/OL/syll_CALC.html
|
Home FAQs Contact
Prof. Sally Keely
Mathematics Dept.
Clark College
## Math& 151 Calculus I Fall 2021
ver. 2021.Fall
rev. 2021-09-19
This document contains information specific to this class and is particularly useful in the first week of the term.
It supplements the main MATHEMATICS ONLINE SYLLABUS which contains key information and policies.
1. Course Information:
2. Course Materials:
Course Information
Course Description:
Course name Calculus I, Course number MATH& 151, 5 credits, section D04D, item #1943
Dates: 9/20/2021 – 12/7/2021 including the final examination.
Format: This is a fully online course conducted asynchronously in Canvas and MyMathLab. This is NOT a "remote learning" format, and thus it has no real-time class meetings (i.e., NO Zoom). FAQ - What's the difference between an online and a remote class? What does "asynchronous" mean?
Coverage: Calculus I is the first in a four-quarter sequence of Calculus for Engineering and the Sciences. Calculus I covers limits, continuity, differentiation including applications, curve sketching, anti-derivatives, and introduces integration including numeric integration.
The goal of this course is not only to learn calculus, but also to improve your critical thinking and problem-solving abilities. This is a challenging technical course intended for those in STEM (science, technology, engineering, mathematics) majors. The quantitative skills requirement [Q] of the general education distribution is met by this course.
Prerequisite:
To qualify for Calculus I you must have met BOTH of the following prerequisites: (1) College Algebra (Clark's math 110 or 111) passed with a grade of at least "C" or qualifying score on the placement test, AND (2) College Trigonometry (Clark's math 103) passed with a grade of at least "C" or qualifying score on the placement test.
Outcomes:
The Mathematics Department has identified the following course level outcomes for which you should be proficient upon successful completion of the course.
1. Determine if a limit exists using appropriate techniques, and evaluate it if it does exist.
2. Demonstrate understanding of the relationship between a function and its derivatives.
3. Find a derivative by hand using basic differentiation rules.
4. Solve an applied rate of change problem.
5. Evaluate a basic integral, and interpret its meaning in context.
How will you be assessed on these outcomes? See Mathematics Online Syllabus: Outcomes Assessment.
Course Materials
MyMathLab Software and e-Textbook:
MyMathLab access with included e-textbook is REQUIRED. Physical textbook is OPTIONAL.
MyMathLab (MML) access is required to access the tests. MML is a digital learning platform provided by the textbook publisher. In MML you can read the e-textbook, watch video tutorials, work interactive practice problems, and take tests. The complete textbook is included as an online e-textbook inside MyMathLab.
e-Textbook that is included inside MyMathLab: Calculus Early Transcendentals (3rd edition) by Briggs/Cochran/Gillett/Schultz, ISBN 9780134763644. If you already have access to this e-textbook in MML from a previous term, then there is nothing to buy. Simply switch your course ID code to the new code that I’ll provide in an announcement in Canvas.
Purchasing MML Access: MyLab Math Multi-Term Access Card w/ Ebook for Calculus 3/E by Briggs+, ISBN 9780134856926. This ISBN is for 3-term access, approx. $111 in the Clark bookstore. If you only wish to purchase one term's worth of access, consider buying directly through Pearson online, where you can purchase 18-week access for $70. For two-term access, Pearson has 24-week access for approx. $105. If you know you will be taking Math&153 with the same textbook, this is a bit of a savings, but be careful not to activate your code until after your 2-week free temporary access ends, so that it lasts right through the end of next term.
|
2021-12-04 13:47:52
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4413103461265564, "perplexity": 4831.065673424826}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362992.98/warc/CC-MAIN-20211204124328-20211204154328-00426.warc.gz"}
|
http://graphswithcodeanddata.xyz/2018/09/01/catcher-defense-with-linear-mixed-effect-models/
|
# Catcher Defense with Linear Mixed Effect Models
## 2018/09/01
In baseball, measuring catcher defense is notoriously tricky. This is because it includes things like pitch framing and game calling that aren’t captured in traditional stats sheets. One approach to this is to do a “with-or-without you” analysis, in other words look at the difference in outcome depending on catcher, holding as many other contributing factors constant as possible. One prominent example of this is Tom Tango’s analysis of passed balls and wild pitches - events where the responsibility can reasonably be assumed to be limited to the pitcher and catcher.
A related approach is to use generalized linear mixed effect models. Notably, this is the approach taken by Baseball Prospectus in their measurements of catcher framing.
The analysis I present here lies somewhere in between these two - but is somewhat orthogonal as well. It’s different than the Baseball Prospectus approach because it controls for many fewer factors and it’s orthogonal to both because it measures an outcome that has many more causal factors in addition to pitcher and catcher identity.
## My analysis
My analysis looks at the number of runs scored by the opponent, and tries to assign credit to the catcher. There are lots of factors besides catcher that contribute to runs scored, for example,
• pitcher
• defense
• batters(s)
• park
This makes assigning credit to the catcher inherently more noisy than something more constrained like wild pitches, passed balls, or called strikes. Nevertheless, I'm going to fit the model and see what happens; as I've written before, I don't believe in letting the perfect be the enemy of the good.
The present analysis is an alternative version of a topic I’ve looked at before, which is estimating catcher defense back to 1911 using the same linear mixed effect modeling framework. For the work linked above I used only game final scores. What I do here is to count only the runs scored through the end of the 5th inning. The logic behind that choice is that it will mitigate complications of knowing which relief pitchers were brought in - it’s mostly true that the starting pitcher will still be there in the 5th. In the current analysis I use retrosheet play-by-play data from 1961 - 2017.
## linear mixed effect models
Linear mixed effect models are appropriate when there are groups within the data, and there are coefficients that vary across groups. The canonical example I think of is schools - if you model outcomes (test scores, or graduation rates, say) as a function of spending on after school programs or something similar, you would expect a positively correlated relationship between outcomes and spending. However, each school starts from its own baseline. So if you threw them all into the same linear regression model, you'd get a model that doesn't match the reality. In the linear mixed effect model framework you can fit a slope for outcome vs spending that is constant across groups (the fixed effect) and an intercept that is different for each group (the random effect). In the baseball context, the catchers, pitchers, park, offense and defense are all modeled as random effects.
The linear mixed effect model itself is essentially a regression with an L2 penalty - a ridge regression. One main difference is that in a ridge regression you specify the penalty strength, and the same value applies to all variables. In the linear mixed effect model, the penalty is applied only to the random effects, the strength of the penalty varies across groups, and the strength values aren't specified but are estimated based on maximum likelihood analysis of the profiled likelihood - the likelihood integrated over the random effect values. Technically what the model gives as output is the mode of the random effects values. In many cases this is a good approximation to the mean of the posterior probability distribution, but in general has a somewhat squishy interpretation.
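In symbols (a gloss of the preceding paragraph using generic notation, not notation from the post): with response $y$, fixed-effects design matrix $X$, and random-effects design matrix $Z$, the fitted effects solve a ridge-like penalized least squares problem,

$$(\hat{\beta}, \hat{b}) \,=\, \arg\min_{\beta,\, b}\; \lVert y - X\beta - Zb \rVert^2 \,+\, \sum_{g} \lambda_g \lVert b_g \rVert^2,$$

where $b_g$ collects the random effects for grouping factor $g$ (catcher, pitcher, park, and so on), and each penalty strength $\lambda_g$ is estimated from the profiled likelihood rather than set by hand.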
A great interactive visualization of mixed effects models is available here. A good technical discussion is available in the lme4 "Computational Methods" vignette.
## The data
For this analysis I pulled event data from my local copy of the retrosheet play-by-play data. I limited the seasons from 1961 through 2017. For each game I took the starting catcher ID, the starting pitcher ID, the home team ID (i.e. the park), the season year, the score through the end of the 5th inning. The data are available as a gist here.
library(dplyr)
# load the data
#> Parsed with column specification:
#> cols(
#> game_id = col_character(),
#> year_id = col_integer(),
#> bat_home_id = col_integer(),
#> off_score = col_integer(),
#> def_pitcher = col_character(),
#> def_catcher = col_character(),
#> off_team = col_character(),
#> def_team = col_character(),
#> park_id = col_character(),
#> park = col_character()
#> )
df$year_id = factor(df$year_id)
#> # A tibble: 6 x 10
#> game_id year_id bat_home_id off_score def_pitcher def_catcher off_team
#> <chr> <fct> <int> <int> <chr> <chr> <chr>
#> 1 WS219610… 1961 0 3 danib102_1… retzk101_1… MIN_1961
#> 2 WS219610… 1961 0 2 mcclj104_1… retzk101_1… MIN_1961
#> 3 WS219610… 1961 0 1 danib102_1… retzk101_1… KC1_1961
#> 4 WS219610… 1961 0 1 hobae101_1… retzk101_1… BOS_1961
#> 5 WS219610… 1961 0 0 burnp102_1… retzk101_1… BOS_1961
#> 6 WS219610… 1961 0 2 gablg102_1… dalep101_1… CHA_1961
#> # ... with 3 more variables: def_team <chr>, park_id <chr>, park <chr>
Note that each pitcher, team, defense, etc have to have the season appended so that each season is treated as a different entity.
The model I use is this,
library(lme4)
model_catcher_def = function(df) {
lmer_mod = lmer(off_score ~
1 + year_id +
(1|park_id) + (1|off_team) +
(1|def_team) + (1|def_pitcher) +
(1|def_catcher), data=df)
}
## model analysis
Here I execute the model
lmer_mod = model_catcher_def(df)
Here’s the summary of the standard deviation of the random effects. One interpretation of these is that the higher the value, the more significant that factor is in determining the outcome. So we can see that pitcher is most important, followed by park, offensive team, defensive team and finally, catcher - it would certainly be surprising if we had found catcher was the most important factor!
summary(lmer_mod)$varcor
#> Groups Name Std.Dev.
#> def_pitcher (Intercept) 0.324146
#> def_catcher (Intercept) 0.099711
#> def_team (Intercept) 0.182797
#> off_team (Intercept) 0.197680
#> park_id (Intercept) 0.237530
#> Residual 2.306134
In order to extract the random effect values, I define some helpers.
# parse random effects to a data frame
ranef_to_df = function(lmer_mod, ranef_nm) {
  rr = ranef(lmer_mod)
  data.frame(k=rownames(rr[ranef_nm][[1]]),
             value=rr[ranef_nm][[1]][,1],
             ranef_nm = ranef_nm,
             stringsAsFactors = FALSE)
}
# loop over all random effects and parse to data frame
parse_ranefs = function(lmer_mod) {
  rfs = names(ranef(lmer_mod))
  ll = lapply(rfs, function(rf) {
    ranef_to_df(lmer_mod, rf)
  }) %>% dplyr::bind_rows()
}
Here I get all the random effects
ranef_summary_df = parse_ranefs(lmer_mod)
The columns are:
• k: the unique key of the random effect
• value: the value, in units of runs per 15 outs
• ranef_nm: the name of the random effect
Let's see which keys had the highest (and lowest) values, for each random effect. First, the team variables
ranef_summary_df %>%
  group_by(ranef_nm) %>%
  filter(value==min(value) | value == max(value)) %>%
  ungroup() %>%
  filter(ranef_nm %in% c("def_team", "off_team", "park_id"))
#> # A tibble: 6 x 3
#> k value ranef_nm
#> <chr> <dbl> <chr>
#> 1 ATL_1995 -0.341 def_team
#> 2 DET_1996 0.458 def_team
#> 3 HOU_1964 -0.401 off_team
#> 4 TOR_2015 0.464 off_team
#> 5 COL_1996 0.947 park_id
#> 6 SDN_1998 -0.470 park_id
So the model tells us:
• best defense: 1995 Braves
• worst defense: 1996 Tigers
The "defense" value here is based on runs scored, so there's an interdependence between pitchers and defensive play - it's probably more correct to think of it as the best team run prevention.
• best offense: 2015 Blue Jays
• worst offense: 1964 Astros
The 2015 Blue Jays as best offense since 1961 seems surprising. I can double check by looking at the z-score of runs scored, with the Lahman data.
Lahman::Teams %>%
  group_by(yearID) %>%
  mutate(m=mean(R), s=sd(R), z=(R-m)/s) %>%
  select(yearID, teamID, R, m, s, z) %>%
  ungroup() %>%
  arrange(-z) %>%
  head(10) %>%
  as.data.frame()
#> yearID teamID R m s z
#> 1 2015 TOR 891 688.2333 58.76175 3.450657
#> 2 1915 DET 778 592.2083 62.16736 2.988573
#> 3 2007 NYA 968 777.4000 69.05450 2.760139
#> 4 1976 CIN 857 645.5000 78.12420 2.707228
#> 5 1953 BRO 955 714.1250 91.90058 2.621039
#> 6 2016 BOS 878 724.8000 59.78259 2.562619
#> 7 1982 ML4 891 696.5385 76.69432 2.535540
#> 8 2006 NYA 930 786.6333 57.18782 2.506944
#> 9 2005 BOS 910 744.1667 66.35541 2.499168
#> 10 1950 BOS 1027 750.8125 110.94096 2.489500
Well, there you have it! The 2015 Blue Jays have the highest z-score for runs scored in baseball history!
• largest park factor: 1996 Colorado
• lowest park factor: 1998 San Diego
Seems plausible. Now for the player-based estimates. First I define a table to match the records and get the actual names instead of the esoteric retrosheet IDs.
pl_lkup = Lahman::Master %>%
  mutate(nameFull = paste(nameFirst, nameLast)) %>%
  dplyr::select(retroID, nameFull)
Get the player-based min and max random effects
player_summary = ranef_summary_df %>%
  group_by(ranef_nm) %>%
  filter(value==min(value) | value == max(value)) %>%
  filter(ranef_nm %in% c("def_pitcher", "def_catcher"))
# add a new column to strip the year from the name to merge
# with the player lookup
player_summary$retroID = sapply(
  stringr::str_split(player_summary$k, "_"), function(s) {s[[1]]})
player_summary %>%
  merge(pl_lkup, by="retroID") %>%
  dplyr::select(nameFull, k, value, ranef_nm)
#> nameFull k value ranef_nm
#> 1 Mike Heath heatm001_1985 0.1272693 def_catcher
#> 2 Charles Johnson johnc002_1996 -0.1073388 def_catcher
#> 3 Randy Johnson johnr005_2004 -0.6903628 def_pitcher
#> 4 Todd Van Poppel vanpt001_1996 0.5976675 def_pitcher
So we have
• best pitcher: 2004 Randy Johnson
• worst pitcher: 1996 Todd Van Poppel
Passes the sniff test.
• best catcher: 1996 Charles Johnson
• worst catcher: 1985 Mike Heath
I recall Charles Johnson being considered a great defensive catcher. As far as Mike Heath, I have no idea if that's plausible or not.
## yearly and career total values
The random effects values tell us run values on a per-game basis. Here I define a function to aggregate over a season, and over a career, to define a total-value counting stat.
ranef_rankings = function(ranef_summary_df, ranef_nm_) {
  pl_lkup = Lahman::Master %>%
    mutate(nameFull = paste(nameFirst, nameLast)) %>%
    dplyr::select(retroID, nameFull)
  aa = ranef_summary_df %>%
    filter(ranef_nm == ranef_nm_) %>%
    merge(df, by.x="k", by.y=ranef_nm_)
  aa$player_id = sapply(stringr::str_split(aa$k, "_"), function(s) {s[[1]]})
  aa$season = sapply(stringr::str_split(aa$k, "_"), function(s) {s[[2]]})
  career = aa %>%
    group_by(player_id) %>%
    summarise(mean_value=mean(value), sum_value=sum(value)) %>%
    arrange(mean_value) %>%
    mutate(mean_rank = row_number()) %>%
    ungroup() %>%
    merge(pl_lkup, by.x="player_id", by.y="retroID")
  yearly = aa %>%
    group_by(player_id, season) %>%
    summarise(mean_value=mean(value), sum_value=sum(value)) %>%
    arrange(mean_value) %>%
    mutate(mean_rank = row_number()) %>%
    ungroup() %>%
    merge(pl_lkup, by.x="player_id", by.y="retroID")
  list(yearly = yearly, career = career)
}
### pitchers
Although the point here was to look at catchers, I'll first apply this to pitchers as a sanity check.
pitcher_rankings = ranef_rankings(ranef_summary_df, "def_pitcher")
The top ten runs-per-game by pitcher season
pitcher_rankings$yearly %>%
arrange(mean_value) %>%
as.data.frame()
#> player_id season mean_value sum_value mean_rank nameFull
#> 1 johnr005 2004 -0.6903628 -24.16270 1 Randy Johnson
#> 2 martp001 2000 -0.6803334 -19.72967 1 Pedro Martinez
#> 3 schic002 2004 -0.6311972 -20.19831 1 Curt Schilling
#> 4 johnr005 1995 -0.6275477 -18.82643 2 Randy Johnson
#> 6 santj003 2006 -0.6000581 -20.40197 1 Johan Santana
#> 7 johnr005 2001 -0.5937011 -20.18584 3 Randy Johnson
#> 8 clemr001 1997 -0.5897730 -20.05228 1 Roger Clemens
#> 9 appik001 1993 -0.5600134 -19.04046 1 Kevin Appier
#> 10 martp001 2003 -0.5593239 -16.22039 2 Pedro Martinez
Seems plausible. I’d like to see Pedro 2001, and I’m not sure I like seeing Kevin Appier on there, but anyway moving on…
The top ten runs per season
pitcher_rankings$yearly %>%
  arrange(sum_value) %>%
  head(10) %>%
  as.data.frame()
#> player_id season mean_value sum_value mean_rank nameFull
#> 1 johnr005 2004 -0.6903628 -24.16270 1 Randy Johnson
#> 2 carls001 1980 -0.5438898 -20.66781 1 Steve Carlton
#> 3 santj003 2006 -0.6000581 -20.40197 1 Johan Santana
#> 4 marij101 1963 -0.5057191 -20.22877 2 Juan Marichal
#> 5 schic002 2004 -0.6311972 -20.19831 1 Curt Schilling
#> 6 johnr005 2001 -0.5937011 -20.18584 3 Randy Johnson
#> 7 clemr001 1997 -0.5897730 -20.05228 1 Roger Clemens
#> 8 martp001 2000 -0.6803334 -19.72967 1 Pedro Martinez
#> 9 marij101 1966 -0.5452669 -19.62961 1 Juan Marichal
#> 10 koufs101 1963 -0.4859726 -19.43891 1 Sandy Koufax
The top ten mean per game over the career
pitcher_rankings$career %>%
arrange(mean_value) %>%
as.data.frame()
#> player_id mean_value sum_value mean_rank nameFull
#> 1 kersc001 -0.3617644 -104.91167 1 Clayton Kershaw
#> 2 martp001 -0.3222799 -131.81246 2 Pedro Martinez
#> 3 webbb001 -0.3018462 -59.76554 3 Brandon Webb
#> 4 salec001 -0.2989121 -53.80418 4 Chris Sale
#> 5 clemr001 -0.2977737 -210.52597 5 Roger Clemens
#> 6 santj003 -0.2959499 -84.04978 6 Johan Santana
#> 7 koufs101 -0.2847562 -60.08356 7 Sandy Koufax
#> 8 schic002 -0.2841804 -123.90265 8 Curt Schilling
#> 9 johnr005 -0.2605288 -157.09886 9 Randy Johnson
#> 10 hallr001 -0.2538522 -99.00237 10 Roy Halladay
The top ten total over the career
pitcher_rankings$career %>%
  arrange(sum_value) %>%
  head(10) %>%
  as.data.frame()
#> player_id mean_value sum_value mean_rank nameFull
#> 1 clemr001 -0.2977737 -210.5260 5 Roger Clemens
#> 2 johnr005 -0.2605288 -157.0989 9 Randy Johnson
#> 3 maddg002 -0.1970610 -145.8252 39 Greg Maddux
#> 4 martp001 -0.3222799 -131.8125 2 Pedro Martinez
#> 5 seavt001 -0.2006751 -129.8368 37 Tom Seaver
#> 6 schic002 -0.2841804 -123.9027 8 Curt Schilling
#> 7 mussm001 -0.2168544 -116.2340 28 Mike Mussina
#> 8 palmj001 -0.2120193 -110.4621 32 Jim Palmer
#> 9 blylb001 -0.1596118 -109.3341 85 Bert Blyleven
#> 10 glavt001 -0.1577112 -107.5590 88 Tom Glavine
### catchers
And finally, the best defensive catchers according to this methodology
catcher_rankings = ranef_rankings(ranef_summary_df, "def_catcher")
The top ten runs-per-game by catcher season
catcher_rankings$yearly %>%
arrange(mean_value) %>%
as.data.frame()
#> player_id season mean_value sum_value mean_rank nameFull
#> 1 johnc002 1996 -0.10733880 -12.129284 1 Charles Johnson
#> 2 essij001 1980 -0.09486632 -6.261177 1 Jim Essian
#> 3 hernr002 2002 -0.09423886 -11.779858 1 Ramon Hernandez
#> 4 ausmb001 2005 -0.09126695 -10.769500 1 Brad Ausmus
#> 5 pagnt001 1996 -0.08716409 -9.413722 1 Tom Pagnozzi
#> 6 lodup001 2003 -0.08639865 -10.367838 1 Paul Lo Duca
#> 7 kendj001 2007 -0.08443976 -10.977169 1 Jason Kendall
#> 8 lopej001 1994 -0.08405470 -6.051938 1 Javy Lopez
#> 9 cartg001 1979 -0.07873492 -10.629214 1 Gary Carter
#> 10 varij001 2001 -0.07814173 -3.672661 1 Jason Varitek
The top ten runs per season
catcher_rankings$yearly %>%
  arrange(sum_value) %>%
  head(10) %>%
  as.data.frame()
#> player_id season mean_value sum_value mean_rank nameFull
#> 1 johnc002 1996 -0.10733880 -12.129284 1 Charles Johnson
#> 2 hernr002 2002 -0.09423886 -11.779858 1 Ramon Hernandez
#> 3 kendj001 2007 -0.08443976 -10.977169 1 Jason Kendall
#> 4 ausmb001 2005 -0.09126695 -10.769500 1 Brad Ausmus
#> 5 cartg001 1979 -0.07873492 -10.629214 1 Gary Carter
#> 6 lodup001 2003 -0.08639865 -10.367838 1 Paul Lo Duca
#> 7 piazm001 1996 -0.06918476 -9.893421 1 Mike Piazza
#> 8 cartg001 1982 -0.06477695 -9.781320 2 Gary Carter
#> 9 moliy001 2013 -0.07435972 -9.518044 1 Yadier Molina
#> 10 pagnt001 1996 -0.08716409 -9.413722 1 Tom Pagnozzi
The top ten mean per game over the career
catcher_rankings$career %>%
arrange(mean_value) %>%
as.data.frame()
#> player_id mean_value sum_value mean_rank nameFull
#> 1 sweer101 -0.03734670 -7.207914 1 Rick Sweet
#> 2 cartg001 -0.03618157 -70.698796 2 Gary Carter
#> 3 maill001 -0.03547138 -2.802239 3 Luke Maile
#> 4 skagd101 -0.03333700 -5.700627 4 Dave Skaggs
#> 5 moliy001 -0.03205300 -52.983609 5 Yadier Molina
#> 6 ausmb001 -0.03072853 -54.266592 6 Brad Ausmus
#> 7 moora001 -0.03063716 -2.236513 7 Adam Moore
#> 8 josec002 -0.03035304 -8.468497 8 Caleb Joseph
#> 9 johnr009 -0.03012177 -6.446059 9 Rob Johnson
#> 10 rodgb102 -0.03002153 -24.377485 10 Buck Rodgers
The top ten total over the career
catcher_rankings\$career %>%
arrange(sum_value) %>%
as.data.frame()
#> player_id mean_value sum_value mean_rank nameFull
#> 1 cartg001 -0.03618157 -70.69880 2 Gary Carter
#> 2 ausmb001 -0.03072853 -54.26659 6 Brad Ausmus
#> 3 moliy001 -0.03205300 -52.98361 5 Yadier Molina
#> 4 piazm001 -0.02755544 -44.14381 16 Mike Piazza
#> 5 penat001 -0.02413506 -42.93628 24 Tony Pena
#> 6 varij001 -0.02997632 -41.12751 11 Jason Varitek
#> 7 piera001 -0.02034347 -37.26924 42 A. J. Pierzynski
#> 8 martr004 -0.02379236 -32.92862 25 Russell Martin
#> 9 fiskc001 -0.01528992 -32.06297 76 Carlton Fisk
#> 10 bencj101 -0.01703744 -27.70288 62 Johnny Bench
## Conclusion
I’ve presented a way of using linear mixed effects models to measure overall catcher defense, encompassing framing, game calling, etc. As a consequence of the mixed effect model, we also get pitcher estimates that can be used as a sanity check. There is a tight coupling of pitcher with defense and there’s anecdotal evidence that the model has not adequately distinguished the independent effects of those two factors - and a similar, but probably lesser, effect exists for catcher numbers. So these results should be though of as, ahem, ballpark estimates.
There are a number of ways the study could be improved on in a follow up including,
• use more seasons, based on box score data
• model the runs scored as a non-Gaussian random variable, e.g. zero-inflated negative binomial
• normalize the runs distribution across seasons - as it stands it favors high run environments (because it measures absolute runs and there are more to go around when the environment is high)
|
2023-03-20 16:46:52
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39652305841445923, "perplexity": 11080.3230678703}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943484.34/warc/CC-MAIN-20230320144934-20230320174934-00788.warc.gz"}
|
https://regencyinnrosenbergtx.com/casino-online-play/3-of-a-kind-poker.php
|
# 3 Of A Kind Poker
Here you will find the list of all poker hands, from best to worst. A three of a kind (German: Drilling) consists of three cards of the same rank. In poker, the term "hand" describes the best five cards a player can use; a three of a kind, in English also called trips, is a hand like 2♢ 2♤ 2♥ K♤ 6♤, with three cards of the same value plus two kickers. The top of the ranking runs: 1. Royal Flush, 2. Straight Flush, 3. Four of a Kind, 4. Full House, 5. Flush, 6. Straight.
## Official Ranking of Poker Hands
The three-of-a-kind hand is the sixth-highest of all poker hands and consists of 3 cards of the same rank together with 2 kicker cards.
## Video
insane wins!! trips rains!! also 3 of a kind along with 4 of a kind!! 3 card poKer
A straight with A, K, Q, J, 10 is the highest straight ("ace high"); a straight with 5, 4, 3, 2, A is the lowest straight ("five high"). Three of a kind (German: Drilling, "3 of a sort") is one of the ten possible poker hand combinations. Example: in poker, the term "hand" describes the best five cards a player can use; a three of a kind, in English also called trips, is a hand like 2♢ 2♤ 2♥ K♤ 6♤, with three cards of the same value plus two kickers. The three-of-a-kind hand is the sixth-highest of all poker hands and consists of 3 cards of the same rank together with 2 kicker cards. Although you have up to seven cards available per round in Texas Hold'em, your hand is always formed from the best five of those seven cards. A pair can have thirteen values and two of four different suits; three of a kind ranks above two pair and below the straight. In total there are 10 different kinds of hands.
Three of a Kind is made when we hold three cards of identical rank. The strength of our three of a kind is determined by its rank value, where aces are high. This leaves room for two kickers in regular poker variants that use 5-card hands. Three of a Kind beats two pair but loses to a straight, and the hand can also be ranked based on the denomination of the three matching cards. It is frequently referred to as a "set" (when made with a pocket pair) or "trips" (when two of the cards are on the board).

A related question on the probability of getting 3 of a kind in a 5-card poker hand using combinations: "I have a question about finding the probability of 3 of a kind using combinations. This is a solution I originally came up with:

$$\frac{\binom{13}{1}\binom{4}{3}\binom{12}{1}\binom{4}{1}\binom{11}{1}\binom{4}{1}}{\binom{52}{5}}$$

That is, choose a face value for the 3 of a kind, three of its four suits, and then a rank and a suit for each of the two kickers."
I will add that the first two choices are different, but the last two (the kicker picks) are the same, combination-wise, so that solution counts every hand twice.
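Spelling out the fix (a standard correction, sketched here rather than quoted from the thread): dividing by $2!$ removes the double-counting of the two kickers, or equivalently one can choose both kicker ranks at once:

$$P(\text{three of a kind}) = \frac{\binom{13}{1}\binom{4}{3}\binom{12}{2}\binom{4}{1}^{2}}{\binom{52}{5}} = \frac{54912}{2598960} \approx 2.11\%$$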
It ranks below a straight and above two pair.
Each three of a kind is ranked first by the rank of its triplet, then by the rank of its highest-ranking kicker, and finally by the rank of its lowest-ranking kicker.
In community card games, such as Texas hold 'em, three of a kind is called a set only when it comprises a pocket pair and a third card on the board.
Each two pair is ranked first by the rank of its highest-ranking pair, then by the rank of its lowest-ranking pair, and finally by the rank of its kicker.
It ranks below two pair and above high card. Each one pair is ranked first by the rank of its pair, then by the rank of its highest-ranking kicker, then by the rank of its second highest-ranking kicker, and finally by the rank of its lowest-ranking kicker.
Each high card hand is ranked first by the rank of its highest-ranking card, then by the rank of its second highest-ranking card, then by the rank of its third highest-ranking card, then by the rank of its fourth highest-ranking card, and finally by the rank of its lowest-ranking card.
Both players technically hold three of a kind Tens with an Ace kicker. This leaves us room to incorporate a second kicker as the tie-breaker.
Hand 1 is trip Tens with Ace Queen kickers while hand 2 is trip Tens with Ace King kickers. Hand 2 hence wins the pot since the second kicker is stronger.
Trips is where we only use one of our hole-cards when formulating the three-of-a-kind component of the hand.
A set is where we formulate our three of a kind using a pocket pair in the hole. If we have top trips, we might find ourselves up against the full-house made with a pocket-pair.
In many cases, sets are strong enough for us to be happy about committing postflop when playing in bb cash games. There are exceptions however, especially on board textures where straights and flushes are possible.
Trips needs to be treated with a little more caution, and whether we commit the stacks or not will usually be dependent on whether we have a decent kicker.
Caution is especially advised with bottom and middle set, since they often get novice PLO players into a lot of difficulty.
These hands are often not strong enough to stack off with for bb, especially assuming no relevant redraws.
If the combination is made with a pocket pair and one community card, it is called a "set".
https://tex.stackexchange.com/questions/136423/biblatex-formatting-article-entry
# Biblatex: formatting @article-entry
I'm currently writing a thesis and have 2 questions concerning biblatex:
• I would like to skip the "Issue" that is up to now always displayed together with "Year"
• I would like to have "Year" at the end
Here is an example:
\documentclass[]{scrreprt}
\usepackage[ngerman]{babel}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{url}
\usepackage{hyperref}
\usepackage[backend=biber, sorting=none, style=numeric]{biblatex}
\usepackage{filecontents}
\begin{filecontents}{Bibliographie.bib}
@article{PRL.62.59,
title = {Atomic-Resolution Imaging of Close-Packed Metal Surfaces by Scanning Tunneling Microscopy},
author = {Wintterlin, J. and Wiechers, J. and Brune, H. and Gritsch, T. and H\"ofer, H. and Behm, R. J.},
journal = {Phys. Rev. Lett.},
volume = {62},
issue = {1},
pages = {59--62},
year = {1989},
doi = {10.1103/PhysRevLett.62.59},
publisher = {American Physical Society}
}
\end{filecontents}
\begin{document}
This is a test \cite{PRL.62.59}
\printbibliography
\end{document}
Right now it's like this:
J. Wintterlin u. a. “Atomic-Resolution Imaging of Close-Packed Metal Surfaces by Scanning Tunneling Microscopy”. In: Phys. Rev. Lett. 62 (1 1989), S. 59–62. doi: 10.1103/PhysRevLett.62.59. url: http://link.aps.org/doi/10.1103/PhysRevLett.62.59.
And it should be like this:
J. Wintterlin u. a. “Atomic-Resolution Imaging of Close-Packed Metal Surfaces by Scanning Tunneling Microscopy”. In: Phys. Rev. Lett. 62, S. 59–62. doi: 10.1103/PhysRevLett.62.59. url: http://link.aps.org/doi/10.1103/PhysRevLett.62.59 (1989).
Would be great if someone could help. Thanks in advance
• One quick note: I'm pretty sure that you should be using the number field rather than the issue field (which is meant more for things like 'Summer' 'Spring'). If a journal is published four times a year, you usually have some combination of volume and issue, where volume refers to the volume for that year, and issue refers to whether it is the first, second, third, or fourth issue of that particular volume/year.
– jon
Oct 4, 2013 at 15:09
• Hm, thanks for your comment. I want to skip both since the literature I'm using (Science, Nature, PRL, PRB) can mostly be found only by "volume" and "page"... Oct 4, 2013 at 15:25
• Sure, I meant it more as advice on using your entry fields in the way they were meant, not as a solution to your formal output goals. The only time the 'issue' number needs to be included is when the journal restarts each issue at p. 1; usually, the page numbering is continuous throughout the year/vol., but if it isn't, then the issue number really needs to be included. By that token, however, IFF you don't want to see the issue number printed, it is actually safer to not include the issue number for journals where it is not wanted. Otherwise, how do you get it back in when you do need it?
– jon
Oct 4, 2013 at 15:34
• Ok, now I understand... ;) So I will delete all issue-numbers in the .bib-entrys where they are not needed. But in the case I need them it would be nice to have them appear like: 'author': 'title'. 'journal' 'volume'.'issue', 'page'. 'DOI' ('year'). Oct 4, 2013 at 16:01
Hmm, of course, I should've said, change them to number (read number everywhere I wrote issue in the previous comment; sorry!), and that it is safer to not include a number entry where they are not necessary. Using issue is just wrong in this case at the .bib file level. And, of course, there is no harm in leaving in the number if you don't mind output like volume.number (this should be the default). The extra information does no harm when it appears in the right place.
– jon
Oct 4, 2013 at 16:13
It is not clear how you are including all these various types of theses in your .bib file, so I'm skipping that (for now at least). But changing the date location is easily done for article types. Hopefully it gives you enough information for changing other entry types as needed.
\documentclass[]{scrreprt}
\usepackage{filecontents}
\begin{filecontents}{\jobname.bib}
@article{PRL.62.59,
title = {Atomic-Resolution Imaging of Close-Packed Metal Surfaces by Scanning Tunneling Microscopy},
author = {Wintterlin, J. and Wiechers, J. and Brune, H. and Gritsch, T. and H\"ofer, H. and Behm, R. J.},
journal = {Phys. Rev. Lett.},
volume = {62},
number = {1}, % <-- N.B.: *not* issue
pages = {59--62},
year = {1989},
doi = {10.1103/PhysRevLett.62.59},
publisher = {American Physical Society}
}
\end{filecontents}
\usepackage[ngerman]{babel}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{url}
\usepackage{hyperref}
\usepackage[backend=biber, sorting=none, style=numeric]{biblatex}
\renewbibmacro*{journal+issuetitle}{%
\usebibmacro{journal}%
\setunit*{\addspace}%
\iffieldundef{series}
{}
{\newunit
\printfield{series}%
\setunit{\addspace}}%
\usebibmacro{volume+number+eid}%
\setunit{\addspace}%
% \usebibmacro{issue+date}% date suppressed here; it is printed at the end of the entry instead
\usebibmacro{issue}%
\newunit}
\DeclareBibliographyDriver{article}{%
\usebibmacro{bibindex}%
\usebibmacro{begentry}%
\usebibmacro{author/translator+others}%
\setunit{\labelnamepunct}\newblock
\usebibmacro{title}%
\newunit
\printlist{language}%
\newunit\newblock
\usebibmacro{byauthor}%
\newunit\newblock
\usebibmacro{bytranslator+others}%
\newunit\newblock
\printfield{version}%
\newunit\newblock
\usebibmacro{in:}%
\usebibmacro{journal+issuetitle}%
\newunit
\usebibmacro{byeditor+others}%
\newunit
\usebibmacro{note+pages}%
\newunit\newblock
\iftoggle{bbx:isbn}
{\printfield{issn}}
{}%
\newunit\newblock
\usebibmacro{doi+eprint+url}%
\setunit{\addspace}%
\printtext[parens]{\printdate}% print the year in parentheses at the end
\newunit\newblock
\setunit{\bibpagerefpunct}\newblock
\usebibmacro{pageref}%
\newunit\newblock
\usebibmacro{related}%
\usebibmacro{finentry}}
\begin{document}
This is a test \cite{PRL.62.59}
\printbibliography
\end{document}
Output (minus italics and small caps, etc.):
J. Wintterlin u. a. “Atomic-Resolution Imaging of Close-Packed Metal Surfaces by Scanning Tunneling Microscopy”. In: Phys. Rev. Lett. 62.1, S. 59–62. doi: 10.1103/PhysRevLett.62.59. url: http://link.aps.org/doi/10.1103/PhysRevLett.62.59 (1989).
• I deleted the thesis-question since it was already answered here link... Oct 7, 2013 at 8:17
https://math.stackexchange.com/questions/3120232/right-hand-limit-of-convex-function-at-boundary
# right hand limit of convex function at boundary
Denote by $$f$$ a monotonically decreasing, convex function defined on $$[0,\infty)$$ that has a derivative $$f'$$ on $$(0,\infty)$$.
I would like to show that if $$f(0)$$ exists and is finite (and $$\lim_{x \to 0} f(x) = f(0)$$), then the right hand limit $$f_+'(x) = \lim_{h \searrow 0} \frac{f(x+h)-f(x)}{h}$$ exists and is finite at $$0$$, and that $$\lim_{x\to 0} f'(x) = f_+'(0)$$ (in my setting it would be fine to assume that $$f'$$ is continuous (or even differentiable) on $$(0,\infty)$$).
I have so far tried to follow (and then modify) https://proofwiki.org/wiki/Convex_Real_Function_is_Left-Hand_and_Right-Hand_Differentiable and the cited reference (1977: K.G. Binmore: Mathematical Analysis: A Straightforward Approach). There it is shown (on the interior of an interval) that $$F_x(h) = \frac{f(x+h)-f(x)}{h}$$ is an increasing function of $$h$$, and hence $$\lim_{h\to 0} F_x(h) = f_+'(x)$$ has to exist. Why this limit exists is already unclear to me, and the closest answer I could find was mentioned in the question Proof that Right hand and Left hand derivatives always exist for convex functions. where an inequality involving limits based on $$h' < 0$$ is used. In my case I cannot reflect around the boundary point and am hence searching for another way to show the statement.
Any hints or comments are greatly appreciated.
• Consider the function $$f(x) = \begin{cases} -\sqrt{16 - (x - 4)^2} & x\in[0,1]\\1-\sqrt 7 - x& x \in (1,\infty)\end{cases}$$ It is tangent to the $y$-axis. – Paul Sinclair Feb 21 at 1:28
• Thank you for the counter example! Now I have to figure out where I can go from here... – user3456032 Feb 22 at 14:37
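A short computation (added here as a sketch) shows why Paul Sinclair's example defeats the claim: on $$[0,1]$$ the derivative is $$f'(x) = \frac{x-4}{\sqrt{16-(x-4)^2}},$$ so $$\lim_{x \searrow 0} f'(x) = \frac{-4}{\sqrt{16-16}} = -\infty,$$ and the right-hand derivative at $$0$$ exists only as an improper limit, even though $$f(0) = 0$$ is finite.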
https://socratic.org/questions/how-do-you-find-the-vertex-and-intercepts-for-y-3-x-2-x-2
# How do you find the vertex and intercepts for y = 3(x - 2) (x + 2)?
Aug 2, 2018
$\text{vertex } = (0,-12), \text{ intercepts } x = \pm 2$
#### Explanation:
$\text{to find the x-intercepts set y = 0}$
$3 \left(x - 2\right) \left(x + 2\right) = 0$
$\text{equate each factor to zero and solve for x}$
$x - 2 = 0 \Rightarrow x = 2$
$x + 2 = 0 \Rightarrow x = - 2$
$\text{to find the y-intercept set x = 0}$
$y = 3 \left(- 2\right) \left(+ 2\right) = - 12$
$\text{The vertex lies on the line of symmetry which is}$
$\text{positioned midway between the x-intercepts}$
${x}_{\text{vertex}} = \frac{+ 2 - 2}{2} = 0$
${y}_{\text{vertex}} = 3 \left(- 2\right) \left(+ 2\right) = - 12$
$\textcolor{magenta}{\text{vertex}} = \left(0, -12\right)$
graph{3(x-2)(x+2) [-40, 40, -20, 20]}
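As a quick check (an added verification, not part of the original answer's method), expanding the factored form gives the same vertex: $y = 3(x - 2)(x + 2) = 3{x}^{2} - 12$, a parabola whose minimum is at $x = 0$ with $y = -12$.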
https://classes.engineering.wustl.edu/ese205/core/index.php?title=Library_Chair&diff=prev&oldid=8322
# Library Chair
## Overview
It's a Wednesday night. You're sitting in your room watching Netflix, putting off studying for your exam next week. You're cozy in bed, and it's cold and dark outside, plus your suite-mate just said he's ordering Domino's and there might be a slice or two left over for you. Every fiber of your being is willing you to stay home, eat, and watch TV, but that nagging voice in the back of your head tells you that you're a Wash U student damnit, and you'd better take this exam seriously! So you muster up some energy and get the kid on your floor who is in the class with you to walk to the library and study for a bit. When you guys get there, though, you see that every seat in the library is taken! ARGH!!
Few things are more annoying than when you decide to actually go study but then can't because there isn't an open spot for you in the library. Enter the Library Chair. In simple terms the Library Chair system consists of an interactive chair and a web interface. The chair can detect when someone is sitting in it, and this information is then relayed (via a few arduinos and a raspberry pi) to a website which displays a map of all the chairs in the library and which ones are occupied. You can check this website from the safety and comfort of your own room; no more cold walks to the library that end up being futile!
For more specific information on how the system operates, please look at the "Design and Solutions" section of this wiki.
## Team Members
Nick Blenko
Tom Howe
Josh Zucker
Our wonderful TA, Mo Wu
And our adviser, Professor Morley
## Objectives
Our group's main objective was to create an interface that library goers could use to see which seats in the library are available, and which are occupied. This interface is dependent on voltage data obtained from a circuit hooked up to an arduino and affixed to the bottom of a library chair. When a human body moves close to the circuit, i.e. when someone sits down in the chair, the output voltage changes and chair occupancy can thus be determined. This circuit then communicates occupancy and location data to another arduino connected to a raspberry pi. The raspberry pi will in turn communicate this data to a web server. We aim for the circuit/arduino device to be accurate, low energy, and completely powered by solar panels.
As individuals we are simultaneously working on a few different smaller goals. These include: finalizing the circuit that will detect when someone is sitting in the chair, figuring out how to communicate effectively between the arduinos and how to generate the wave function necessary for the circuit to operate from an arduino, integrating a solar power source into the circuit, communicating data from the raspberry pi to a web server, and designing and implementing an aesthetically pleasing web interface.
## Challenges
• Learning how to use the TI microcontrollers. This proved to be too difficult, so we adapted and used arduinos instead.
• Rebuilding the circuit after someone burned it in the lab
• Getting accurate current and voltage readings from solar panels/batteries
• Needing to find a new ADC solution after realizing the built in arduino ADC was not powerful enough
## Budget
• Chair: cost varies (donated by Professor Morley)
• Raspberry Pi: $35.00
• Solar cells: $8.69
• Resistors/Capacitors/Diodes: (provided by school)
• Wire: (provided by school)
• Web server: $12/year
• Bluehost account: $3.99/month
• RF Link Receiver (433 MHz): $4.95
• RF Link Transmitter (433 MHz): $3.95
• 2x Arduino Uno: one provided by school, one bought at link below for $27.95
• External 16-bit ADC converter: $14.95
• Rechargeable batteries: $19.98
Total: $139.44 (total assumes a 1-year subscription to the web server and a 3-month subscription to the Bluehost account)
## Design and Solutions
### Module #1: Detect whether someone is in the chair
#### 1.1 Construct RC circuit that outputs variable voltage depending on total capacitance
We first have a regular RC circuit with one metal plate acting as a parallel plate capacitor, where the other plate is effectively the rest of the universe. When a human body comes close to the circuit, its electric field increases the capacitance of this plate. This increased capacitance increases the time constant of the RC circuit, which means that in the same interval of time this altered circuit does not output a voltage as close to its peak voltage as it did before. In other words, when a human body is close to the circuit, it outputs a lower voltage.
Figure 1.1 - The circuit used to detect occupancy based on peak voltage in the circuit.
#### 1.2 Manage voltage flow going into circuit
After prototyping on the NI Elvis, we determined that the best form of input voltage to the circuit would be a 5V square wave with a frequency of around 800kHz and a duty cycle of 50%. (For help prototyping on an NI Elvis board see our tutorial https://classes.engineering.wustl.edu/ese205/core/index.php?title=Use_a_NI_Elvis_board). Since a square wave is always either ‘on’ or ‘off’, this input means that the capacitor is always either charging or discharging, which lets us get more consistent output voltage readings from the circuit. We determined the optimal frequency through trial and error and striking a balance between signal magnitude and signal noise: using a higher input frequency led to a noisier output voltage signal, but using a lower frequency led to less of a change in voltage.
800kHz is an extremely high frequency, though, considering that the arduino uno cpu has a clock speed of 16MHz, so we searched for a library that could be of use and found this: https://forum.arduino.cc/index.php?topic=117425.0 . However, we struggled to implement it correctly, and further testing showed that we would only need a frequency of around 50-70 kHz.
We used pulse width modulation on the arduino's digital pin number 6 (as seen in Figure 1.1) to supply this square wave. The default frequency for pulse width modulation on digital pin 6 is 976.56 Hz, and the default voltage is 5V. In order to increase this frequency, we reduced the PWM prescaler value on the TCCR0B timer register to 1 from a default of 64, thereby increasing the frequency by a factor of 64. (For reference see: https://arduino-info.wikispaces.com/Arduino-PWM-Frequency)
Here is the code we used to generate the 5V 63kHz square wave input:
void setup() {
pinMode(6,OUTPUT); //initializes function generator pin
TCCR0B = TCCR0B & B11111000 | B00000001; // sets timer 0 divisor to 1, freq=62.5kHz
}
void loop() {
analogWrite(6,128); //sets up square wave with duty=50%
}
#### 1.3 Measure voltage flow exiting circuit, and analyze to determine occupancy
We initially tried to use the arduino's built in 10-bit analog-to-digital converter to detect changes in output voltage from the circuit. Using the minimum reference voltage of 1.1V and 1023 'steps' of specificity (2^10 = 1024), gave us a step size of ~1mV. Since we were dealing with changes in voltage (voltage when chair is occupied vs voltage when chair is vacant) on the order of microvolts, we needed another solution. We found a compatible 16-bit external ADC converter (https://www.adafruit.com/product/1085) and wired it up as shown in figure 1.2. This converter has about 65,000 'steps' of specificity (2^16~65000), enabling detection of voltage changes as small as 0.0002V. We wired in a differential connection to the external converter, which is more favorable than a single-ended connection when trying to reduce noise. In order to accurately interpret this very noisy data, we had to implement a multi-tiered rolling average filter to smooth the data. The exact voltage cutoff values signifying either occupancy or vacancy are dependent on the composition and the environment of the chair, and as such need to be tested for and updated whenever the circuit is affixed to a new chair or the chair is moved.
The code to read in voltages using the external converter and then determine vacancy/occupancy using multi-tiered filters can be found here: https://docs.google.com/document/d/1ttIRGPHdjRMQ3Y-W13mnZI5xyS1DVvmGeYv2uDjGT6I/edit?usp=sharing
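The shape of that read-and-filter loop, as a minimal illustrative sketch (the pin assignments, window size, and cutoff value here are assumptions for illustration; the team's actual multi-tiered filter and tuned thresholds are in the linked document):

#include <Wire.h>
#include <Adafruit_ADS1015.h>   // Adafruit ADS1X15 library; the ADS1115 is the 16-bit part

Adafruit_ADS1115 ads;           // external 16-bit ADC on I2C

const int WINDOW = 32;          // rolling-average window size (illustrative)
long samples[WINDOW];
int idx = 0;
long total = 0;

const long OCCUPIED_BELOW = 12000;  // illustrative cutoff; must be re-measured per chair

void setup() {
  Serial.begin(9600);
  ads.begin();
  for (int i = 0; i < WINDOW; i++) samples[i] = 0;
}

void loop() {
  // Differential read between A0 and A1 reduces common-mode noise.
  long v = ads.readADC_Differential_0_1();

  // Rolling average: drop the oldest sample, keep a running total.
  total -= samples[idx];
  samples[idx] = v;
  total += v;
  idx = (idx + 1) % WINDOW;

  long avg = total / WINDOW;
  bool occupied = (avg < OCCUPIED_BELOW);  // a nearby body lowers the peak voltage
  Serial.println(occupied ? "occupied" : "vacant");
  delay(50);
}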
Figure 1.2 - The external 16-bit analog-to-digital converter used to take voltage readings from the circuit
#### 1.4 Integrate arduino and circuit and attach to chair
We used a solderless breadboard to connect the Arduino to the RC circuit, the external ADC converter, and an RF transmitter. We made these connections as shown in figure 1.1, figure 1.2, and figure 2.1. The circuit-arduino system is shown in figure 1.3. In order to improve the aesthetics of the system, we designed and 3D printed a case (figure 1.4) to cover up all the wires and circuitry. We then screwed this case to the underside of the chair, and stuffed the metal plate from the RC circuit inside the stuffing of the chair in order to get readings most effectively.
Figure 1.3 The circuit/arduino system. Figure 1.4: A rendering of the case for the Arduino and circuit.
#### 1.5 Power the Arduino/circuit with solar power and batteries
In order to power the Arduino, our plan was to use a system of batteries and solar panels: the batteries would be the main power source for the Arduino, and the solar panels would recharge the batteries. We ended up using 6 1.2V 2400mAh batteries wired in series to supply the proper voltage, along with 7 680-Ohm resistors wired in parallel (to avoid overheating) to supply the proper current to the Arduino. We attached 5 4V 80mA solar panels, wired in parallel to create a 4V 400mA system, to the batteries at the junction between the batteries and the Arduino. The positive end of the solar panels was wired with a diode to prevent voltage from the batteries from flowing back into the panels. Wired this way, the panels would provide energy either to the circuit, if it needed it, or to recharging the batteries. There were two main issues with the power. First, the voltage supplied by the batteries was 7.2V while the voltage supplied by a computer or wall socket is 9V; although the Arduino functions fine at 7.2V, the values we were getting from the peak detector were different, and we did not have time to find the new values. Second, the solar panels we got were not powerful enough to recharge the batteries in a timely manner: the current supplied by the panels meant that the batteries would drain faster than they could be recharged. In the future, more powerful solar panels could prevent this from happening.
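The headline numbers behind that layout, as a quick back-of-envelope check using the component values quoted above: battery voltage 6 × 1.2 V = 7.2 V; parallel resistance 680 Ω / 7 ≈ 97 Ω; panel current 5 × 80 mA = 400 mA at 4 V.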
Figure 1.5 - The battery and solar panel system used to power the Arduino.
### Module #2: Relay occupancy data to attractive web interface via raspberry pi
#### 2.1 use RF communication to send data from one Arduino to another
In order to relay data between the arduino on the bottom of the chair and the arduino attached to the raspberry pi, we purchased a pair of 433 MHz transmitter/receiver modules (https://www.sparkfun.com/products/10532 and https://www.sparkfun.com/products/10534). We encoded transmission and reception using the VirtualWire library (https://www.pjrc.com/teensy/td_libs_VirtualWire.html). We started out using sample code from http://www.instructables.com/id/RF-315433-MHz-Transmitter-receiver-Module-and-Ardu/ , and for our final code borrowed from http://forum.arduino.cc/index.php?topic=3256.0 as well. We wired up the modules with help from https://www.sparkfun.com/datasheets/RF/KLP_Walkthrough.pdf . The final code we used to transmit and receive occupancy data can be found here https://docs.google.com/document/d/1ttIRGPHdjRMQ3Y-W13mnZI5xyS1DVvmGeYv2uDjGT6I/edit on page 3. We designed a protocol whereby the chair number and the occupancy data relating to that chair can be sent in 1 byte: if the chair is unoccupied we transmit the chair number, and if the chair is occupied we transmit the chair number plus 128.
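A sketch of both ends of that one-byte protocol (the pin numbers and chair ID below are illustrative assumptions; the team's final code is in the linked document):

#include <VirtualWire.h>

// Compile-time role switch: build one sketch per Arduino.
#define CHAIR_SIDE 1              // set to 0 for the receiver/base Arduino

const uint8_t CHAIR_ID = 3;       // illustrative chair number (0-127)

void setup() {
#if CHAIR_SIDE
  vw_set_tx_pin(12);              // data pin of the 433 MHz transmitter (assumed wiring)
  vw_setup(2000);                 // bits per second
#else
  Serial.begin(9600);             // forward decoded status to the Raspberry Pi
  vw_set_rx_pin(11);              // data pin of the 433 MHz receiver (assumed wiring)
  vw_setup(2000);
  vw_rx_start();
#endif
}

void loop() {
#if CHAIR_SIDE
  bool occupied = /* result of the occupancy filter */ false;
  // Protocol: chair number, plus 128 if occupied, in a single byte.
  uint8_t msg = CHAIR_ID + (occupied ? 128 : 0);
  vw_send(&msg, 1);
  vw_wait_tx();                   // block until the byte has gone out
  delay(1000);
#else
  uint8_t buf[VW_MAX_MESSAGE_LEN];
  uint8_t buflen = VW_MAX_MESSAGE_LEN;
  if (vw_get_message(buf, &buflen) && buflen == 1) {
    uint8_t chair = buf[0] & 0x7F;     // low 7 bits: chair number
    bool occupied = buf[0] & 0x80;     // high bit: occupancy flag
    Serial.print(chair);               // relayed as "chair,status" over USB serial
    Serial.print(',');
    Serial.println(occupied ? 1 : 0);
  }
#endif
}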
The wiring of the receiver to the arduino can be seen in Figure 2.2 (and also at https://www.sparkfun.com/datasheets/RF/KLP_Walkthrough.pdf)
Figure 2.1 - The RF Transmitter used to send data from the Arduino on the chair to the Arduino connected to the Raspberry Pi
Figure 2.2 - the wiring of the receiver module
#### 2.2 communicate from the Arduino to the raspberry pi
Communication from the 'receiver' arduino to the raspberry pi was done through a serial connection using code from http://www.instructables.com/id/Raspberry-Pi-Arduino-Serial-Communication/ .
#### 2.3 use data input into the raspberry pi to create a handsome web interface displaying which seats are occupied and which are not
input tom's python code and procedural steps
## Server
• insert screenshots of code and pi connections**
There were a few steps in setting up our Raspberry Pi server that would relay information from the Arduino to the website interface.
• Downloading basic Raspberry Pi code (NOOBS) onto a microchip
• Launching the Raspberry Pi using USB inputs of a keyboard and a monitor
• Operating a VNC viewer to develop code for the Pi and HTML code for the interface side by side
• Creating code on the Pi such that it automatically launches on a computer (in VNC Viewer) if connected only by Micro USB Power Source
• Building code on the Pi server that would be able to automatically rewrite HTML code based on input
• Writing code in the server that would allow for communication with the Arduino
Initial Launch of the Pi
Necessary Materials
• Raspberry Pi 3
• Micro USB (power source)
• Micro SD card
• Monitor
• Mouse
• HDMI Cable
To initially launch the server, we must first download the necessary preliminary coding found on the official Raspberry Pi website. We do so by using our Micro SD card and plugging it into a USB port on a computer with internet access. We then make sure the Micro SD is clear of all information (if reusing a Micro SD card, be sure to clear using formatting protocol explained on website **insert instructions/reference**). We then download the code off of the website, move the downloaded files to USB input, and wait for the information to copy. We then safely eject the Micro SD card and remove it from the USB. NOOBS is now loaded onto the Micro SD Card.
We then connect the Raspberry Pi to its power source by connecting the Micro USB to the Pi and a Computer (any power source would suffice). We connect a keyboard and a mouse to the Raspberry Pi through USB, and connect the Pi to a monitor using an HDMI cord so we can operate the pi. Now that NOOBS is downloaded and the Pi is fully plugged in, we may code on the Pi as if it were a computer.
VNC Viewer and Automatic Launch of the Pi
Steps:
• Identifying Raspberry Pi IP address
• Enabling VNC viewer on the Pi
• Editing the Pi start code to include connecting to the VNC Viewer
Building Pi Code to Automatically Rewrite HTML Code
• Connecting the Pi to Wifi
• Using Beautiful Soup function
• Plugging in location of output in corresponding Pi and HTML code
Communicating with Arduino
Steps:
• Setting up wired connection between Arduino and Pi
• Writing program in Pi **attach code** that allows for Arduino connection
## Interface
To create the interface, we first bought the domain www.librarychair205.com. We then got a third-party account at BlueHost, where we could manage our domain. Uploading code to the domain involves creating an FTP account, using FileZilla to upload to that FTP account, using Atom to write code in HTML so it is compatible with the FTP account, and being able to grab the data from the Raspberry Pi server.
Our steps:
• Generate functional HTML code in Atom
• Save the HTML code in index form to be compatible with FileZilla
• Set up the connection between the FileZilla account and the online server's account and IP address
• Upload the FTP file to Bluehost
# Results
By the demo, the chair worked. There was a lag of 15–30 seconds between the occupancy status of the chair changing and that change being registered on the website; this was due to the low-pass filter on the Arduino, which smooths out the noise of the circuit, and the time it took for the voltage in the peak detector to change. The lag is not a real issue, given that someone checking the website for open chairs will take more than 30 seconds to get to the library in the first place.
The first goal that our group failed to accomplish was the use of solar power and batteries to power the circuit. We got the solar panels and batteries hooked up and were able to run the Arduino on power from the batteries and panels, but the values we were getting from the proximity-detector circuit were different, as the voltage feeding into the Arduino was different than when it was plugged in. Given more time we would have been able to find the correct range of voltages for running the Arduino off battery power, but since we could not keep the Arduino connected to the computer to display the voltage values while it was powered off the batteries, it would have taken a long time to find the correct window.
The other goal that we were not able to accomplish was using the TI board provided by Professor Morley.
Other than the lag and the power source there were no issues with the chair, and other than the power source and the TI board we met all of our goals. The circuit consistently detected proximity; the Arduino consistently determined which voltages represented occupation and non-occupation and transmitted that to the Arduino connected to the Raspberry Pi; the Pi consistently updated the server, which in turn constantly updated the website. Although there were a lot of independent parts in the project, they all came together and worked to produce a consistently working chair.
https://www.thenakedscientists.com/forum/index.php?topic=65776.0
# 4 a deeper discussion: Is distance an absolute invariant?
#### timey
##### 4 a deeper discussion: Is distance an absolute invariant?
« on: 11/02/2016 18:33:44 »
Although the intention of this thread is to remain within the remit of established physics in a quest for a deeper understanding of such, I have placed this discussion in New Theories, so that should we transgress into uncharted territory, we are free to do so if such becomes relevant.
Is distance an absolute invariant? ...It's a great question, point of fact...
In order to further my understanding of how the intertwining of general relativity with special relativity is mechanically derived. I have questions.
The Lorentz transformations.
Are the Lorentz transformations used to calculate the velocity related slowing of time (relative to a 'stationary' observer), and contracting of distance experienced by the moving reference frame?
Are the Lorentz transformations used to calculate the stretching of the fabric of space?
Are the Lorentz transformations used to calculate gravitational time dilation? If not, what is?
And... are the Lorentz transformations used to calculate an observed length contraction?
*
Ok, I am now asking things in certain ways because I intend that you think about the matter from that perspective. Not because I have not understood the subject matter, at least to some degree anyway. I'm now trying to understand how the GR field equations are slung together, and which maths are used in what context.
I am now talking about a reference frame that is moving at a constant velocity in which we find the observed party, with respect to another reference frame, (that we are considering as stationary with respect to the moving reference frame), where we find the observer. The moving reference frame is travelling through changes in a gravitational field, at an angle that affords the observer a view of its length.
: are the length contractions that an observer observes, a factor of the gravitational time contractions experienced by the observed?
: is velocity related time dilation experienced by the observed, a factor of the stretching of distance observed by the observer?
: is the contracting of distance experienced by the observed, a factor of the time dilation experienced by the observed?
: is the stretching of distance observed by the observer, a factor of his reference frames rate of time dilation?
: is the stretching of distance observed by the observer, a factor of his observation of length contraction?
In the other thread of same question:
Space Flow: you say that between galaxies in the voids that there are still gravitational forces at play, although very weak, and that space time is subsequently flattened. A flattening of space time is suggestive that without the effect of curvature, that a distance between 2 gravitational forces will be a 'shorter' distance than if any significant curvature were apparent. The time dilation aspects of faster time, (relative to earth), in these voids, also denotes that a unit of distance will be covered by a constant velocity more quickly. Yet... This flattened space time is being stretched, and the fact of redshift is given as proof of such. Are you able to talk me through the mechanics of the mathematics for these considerations? Can you calculate gravitational redshift?
Alan: you have said: GR simplifies to SR in the case of no acceleration, or no gravitational field... That there is a gravitational field in the voids, and that where these gravitational fields cross points, as such, they cancel each other out. This being the premise for wormholes presumably?
You have also given an example of some basic algebra that my level of study assumes that I already know. I'd like to understand how to use: GM/r2 ... my interpretation is that G is the gravitational constant, M is mass, but why a big one in this instance please? r2 is radius squared, but is this a straight line radius distance or a circular radius distance... please? Any help appreciated!
Jeff: I know you have been watching the Susskind lectures too, and I have no doubt that you, as a schooled mathematician, will have understood the subtleties of these complex equations far better than I. If you can answer any of my questions, I'd be delighted. This goes for anyone else who understands the concepts being discussed.
« Last Edit: 11/02/2016 19:43:10 by timey »
#### timey
##### Re: 4 a deeper discussion: Is distance an absolute invariant?
« Reply #1 on: 11/02/2016 19:38:48 »
To explain: my interest is in 'distance' rather than length. Distance being the space between things, and length being the distance occupied by matter. Clearly 'a' distance between things that are moving at different speeds relative to each other is variable. But... are we saying that distance itself, empty space between 'things', can be stretched or contracted?
#### Space Flow
##### Re: 4 a deeper discussion: Is distance an absolute invariant?
« Reply #2 on: 11/02/2016 23:05:12 »
Is distance an absolute invariant
This is indeed a complicated question, and in the way you have gone about asking it you have, first, set out your thoughts very clearly and, second, revealed some assumptions and understandings that are not quite correct.
So lets work through it;
I will start by cleaning up your understanding of what I previously said.
Space Flow: you say that between galaxies in the voids that there are still gravitational forces at play, although very weak, and that space time is subsequently flattened. A flattening of space time is suggestive that without the effect of curvature, that a distance between 2 gravitational forces will be a 'shorter' distance than if any significant curvature were apparent. The time dilation aspects of faster time, (relative to earth), in these voids, also denotes that a unit of distance will be covered by a constant velocity more quickly.
There are no totally empty Voids that we have been able to find. But the density of matter in the Voids is extremely low. By extremely low I don't refer to particles per cubic metre but to clusters, galaxies, stars, etc. Yes there are material structures even in the middle of the biggest Voids that we have looked into. So even though we can talk hypothetically about a flat spacetime, in reality we know of no such place. (Actually the very centre of any massive body should have a small region of totally flat spacetime).
The other thing about the above statement, that distances should be shorter in flat spacetime, is not correct. Curvature, or in my case flow rate, increases density, and as such distances are shorter where there is movement. Therefore distances are seen to be longer in flat spacetime and shorter in curved. Anywhere that movement can be observed, whatever the cause, brings time dilation and length contraction into the observations. This is not an answer to the overall question you pose, but it explains what will be observed.
Any unit of distance will always be covered by any kind of velocity more quickly than by not moving at all, so I don't understand what this is supposed to mean: "The time dilation aspects of faster time, (relative to earth), in these voids, also denotes that a unit of distance will be covered by a constant velocity more quickly".
Who is the observer in that statement? If you are talking about an SR scenario, then looking from Earth you are technically not at rest. Even stretching the imagination past reasonable limits, this is not a good comparison and will complicate rather than teach us anything.
But I will set that aside and assume that you have found a point in space that is at rest relative to the centre of this Void. Again using SR to describe the situation, from this observation point any movement you observe by anything in the Void will make the mass that is observed to be moving display time dilation and length contraction by an amount defined by the gamma factor (γ). Gamma is defined as 1 divided by the square root of (1 - (v/c)^2).
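In symbols, with a worked value added for reference (standard SR, not specific to this thread):

$$\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}, \qquad \text{e.g. } v = 0.6c \;\Rightarrow\; \gamma = \frac{1}{\sqrt{1 - 0.36}} = 1.25,$$

so a clock observed moving at 0.6c is seen to tick at 1/γ = 0.8 of its rest rate, and lengths along the motion are observed contracted by the same factor.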
From the reference frame of any mass in the void, without acceleration of any kind involved, they too can consider themselves to be at rest and anything they observe to be moving will be subject to the same treatment as above.
Any effects due to space flatness, or better put, degree of curvature, are outside the analytical powers of SR.
Yet... This flattened space time is being stretched, and the fact of redshift is given as proof of such.
Any movement attributed to space expansion is attributed to movement of space time and not movement through spacetime. It causes a frequency shift like any observed movement towards and away from an observer will do, but not being movement through spacetime is not subject to Gamma factor adjustment. If all the observed movement is by spacetime itself, it is possible to even exceed light speed with no time dilation or length contraction.
Such movement is calculated from the frequency shift observed and in an expanding universe is always red-shift. In other words the spectral signatures of elements contained in the light that we observe have all been shifted to the longer end of the EM spectrum. The light has been stretched.
The math to work that out is fairly simple because we have the constant speed of light as a reference.
An example would be that you analyze the light from a distant object and you identify known absorption lines, but these lines are not where your lab testing tells you they should be.
If an object is moving towards or away from you, these spectral lines will be shifted in wavelength away from their normal wavelength λ (the lab wavelength). If you observe a line at wavelength λo, you can define a redshift z as z = (λo - λ)/λ, λo being your observed wavelength.
If you then multiply this by the speed of light, you will get the speed at which your observed object is moving away from you. This is fine as long as the speeds you are observing are well short of the speed of light.
If you are somehow watching something that, due to the expansion of the Universe, is moving away at relativistic velocity, then you have to add the gamma factor to that equation. Even then it is a little more complicated. Luckily this situation does not change the impact on the subject matter of this post, so we can leave it at that.
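For reference, the low-speed and relativistic versions side by side (standard formulas, sketched here rather than quoted from the thread):

$$z = \frac{\lambda_o - \lambda}{\lambda}, \qquad v \approx cz \ \ (z \ll 1), \qquad 1 + z = \sqrt{\frac{1+\beta}{1-\beta}}, \ \ \beta = \frac{v}{c} \ \text{(purely radial relativistic Doppler)}.$$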
Whether slow or relativistic, this stretching of light due to space expansion is a line of evidence that distance is not an absolute invariant.
Can you calculate gravitational redshift?
This is a subject that deserves its own post.
Under the current curved space interpretation of GR, it requires the use of calculus. Or it would if I was to try and run you through it.
Spaceflow theory on the other hand treats everything as a relative velocity or acceleration between matter and spacetime, and so reduces it to just relative speeds.
I am in the process of working through that maths for inclusion in my own paper and you will be able to view it there once done.
Right, let's see how you go digesting this analysis of just the statement in your vast query that is attributed to me, before I try to address the many other points of your post.
#### alancalverd
##### Re: 4 a deeper discussion: Is distance an absolute invariant?
« Reply #3 on: 11/02/2016 23:29:07 »
Alan: you have said: GR simplifies to SR in the case of no acceleration, or no gravitational field... That there is a gravitational field in the voids, and that where these gravitational fields cross points, as such, that they cancel each other out. This being the premiss for wormholes presumably?
No, just places where the gravitational field isn't merely weak but actually zero. However since all the stuff in the universe is moving around all the time, the zeros are temporary and pretty much unpredictable!
Quote
You have also given an example of some basic algebra that my level of study assumes that I already know. I'd like to understand how to use: GM/r2 ... my interpretation is that G is the gravitational constant, M is mass, but why a big one in this instance please?
We usually write F = GMm/r^2 to denote the force between two masses, and that usually means between a big lump (mother earth, M) and a little one (me, m). So the gravitational field at a distance r from a big lump will be GM/r^2, from which we can calculate the accelerating force on any mass m1, m2, etc that happens to be there.
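Plugging in Earth's numbers as a sanity check (standard constants, added for illustration):

$$g = \frac{GM}{r^2} = \frac{(6.67\times 10^{-11}\,\mathrm{N\,m^2/kg^2})(5.97\times 10^{24}\,\mathrm{kg})}{(6.37\times 10^{6}\,\mathrm{m})^2} \approx 9.8\ \mathrm{m/s^2}.$$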
Quote
r is, classically, the shortest distance between two points in Euclidean space. If there are only two bodies in our universe (I feel a song coming on - it's that time of year!) and our test mass (little m) has no initial velocity, we can ignore all that stuff about conservation of angular momentum and geodesic paths in curved spacetime, and your picture of microgravity in the void of deep space does indeed provide the criteria for simple linear attraction. Of course if there is an initial relative velocity, then conservation of angular momentum will lead to a Specsavers Moment and m will take up a decaying orbit. Yes, folks, sex and physics - better than television any day!
#### timey
##### Re: 4 a deeper discussion: Is distance an absolute invariant?
« Reply #4 on: 11/02/2016 23:53:45 »
Ok, thanks Space Flow, you've given me a good deal to digest there. Great stuff! My lad's hogging my internet connection, but I'll get a good look at it in the morning; he'll be sound asleep then, no bloody doubt, grrr! But on the bright side, I'll have my phone.
Alan, same as, and thanks for the maths lesson. I'll put that to use straight away in tidying up some formulae I've been working on, and btw, you forgot about psychology!
#### Space Flow
##### Re: 4 a deeper discussion: Is distance an absolute invariant?
« Reply #5 on: 12/02/2016 00:10:23 »
"Although the intention of this thread is to remain within the remit of established physics"
"Ok, I am now asking things in certain ways because I intend that you think about the matter from that perspective".
You are probably going to think I am being difficult, but I cannot do both of the above red and blue quotes with the questions worded the way you have worded them.
Are the Lorentz transformations used to calculate the velocity related slowing of time (relative to a 'stationary' observer), and contracting of distance experienced by the moving reference frame?
Answer: Yes, the Lorentz transformations are what is used to find the applicable gamma factor for any observed reference frame that is moving relative to the observer. The gamma factor gives you the time dilation and length contraction observed in a relatively moving reference frame.
The second part of this question does not make sense
(contracting of distance experienced by the moving reference frame?).
A reference frame that is observed to be moving is observed to undergo time dilation and length contraction. This is from the point of view of the observer, so saying that an observed moving reference frame in any way experiences such things is wrong. That same reference frame has every right under SR to consider itself stationary, and it is then the other reference frame that undergoes the effects of observed speed.
One's own reference frame can not experience these effects.
Are the Lorentz transformations used to calculate the stretching of the fabric of space?
If you mean due to Universal expansion, not as a general rule, as the observed speeds we see so far do not warrant that level of difficulty.
But to answer the question: if they describe the situation more accurately than not using them, then YES. To clarify further, if we were observing expansion at relativistic speeds then we would be forced to take gamma into consideration. As it is, at the speeds we are dealing with at the moment, the extra level of computing difficulty is not justified by the extremely small fractional difference it would make in our answers.
Are the Lorentz transformations used to calculate gravitational time dilation? If not, what is?
And... are the Lorentz transformations used to calculate an observed length contraction?
To explain: my interest is in 'distance' rather than length. Distance being the space between things, and length being the distance occupied by matter. Clearly 'a' distance between things that are moving at different speeds relative to each other is variable. But... are we saying that distance itself, empty space between 'things', can be stretched or contracted?
Answer: I do not quite understand the distinction you make between distance and length.
Things as you call them are composed of space with an extremely small fraction of impurities within this space we call matter. Even then the representative matter is said to be considered as point particles so really we are talking about space when we say distance or length.
Whatever the space we are observing does, then the matter within that space is observed to do the same thing. Check out the latest announcement of the discovery of gravitational waves to get a better understanding of what happens to matter when the space that contains it gets distorted in any way.
So the correct answer to your question is that all these relativistic effects are observed to happen to the spacetime and the matter contained by that spacetime just conforms to the observed shape and rhythm of that spacetime.
With the exception of the recently discovered Gravitational waves, we have no other way of directly observing spacetime. So we use the observation of matter to show us what spacetime is seen to be doing. Matter is the proxy for what we observe to be happening at the coordinates it occupies.
I don't know if you have noticed but not once in all my answers have I even hinted that any of those effects actually happen. Only that they are observed to happen. So in a sense your main question remains unanswered.
The only thing is the recent discovery of Gravitational waves can be taken as a direct observation of spacetime distance or length changing.
Everything else until now depended on the understanding of GR. These gravitational waves are a direct observation of spacetime varying in shape and size.
If we had observed such an effect without GR to explain it, we would have been forced to immediately invent a theory to account for a spacetime that can stretch and compress, as that is exactly what direct observation showed to have happened. Any theory that claims otherwise will have a very hard time explaining this data.
So LIGO has answered this question for us. Distance is most certainly not an invariant of any sort, let alone absolute.
« Last Edit: 12/02/2016 00:59:47 by Space Flow »
We are made of Spacetime; with a sprinkling of Stardust.
Matter tells Spacetime how to Flow; Spacetime tells matter where to go
#### timey
• Neilep Level Member
• 1356
##### Re: 4 a deeper discussion: Is distance an absolute invariant?
« Reply #6 on: 12/02/2016 01:28:38 »
Space Flow - thanks, most of your answers are as I anticipated; the maths will take me some time. I'll get back to you on the majority of this post when I've given it some further thought, and also when I have read your link. One thing does stick out though: you say you do not understand how the constant velocity of light travelling through quicker rates of time will cover units of distance faster than it does when travelling units of distance in a reduced rate of time.
Yes - gravity waves. You may well find it amusing to see in the gravity waves thread, that my 'visual' of the premiss of the experiment has morphed several times now.
Given that we are here in New Theories, I can speak more freely. And please appreciate that I am stuck here with a very small and rather cracked iPhone screen for investigating my interests, but a hugely vivid imagination...
First I imagined that the light was sent along tubes at an angle to the tube itself, where the mirrors lined the tube and the light bounced around inside to a measuring point and a gravitational 'shift' in the light was detectable.
Then Ethos said no, and told me that the light is split into a beam down 2 tubes aligned in a V angle with mirrors at the ends, the idea being that the distance between the mirrors was going to be changed by the gravitational wave. This, I imagined was going to a more closed up V shape...
Then Alan dropped the subtle hint that it is the length of the arms of the V that are supposed to be affected, which is when the experiment became vertically oriented in my mind.
There are many roads to Rome, aye?
Nevertheless, I do think that my notion of the light being exposed to an increase in gravitational energy causing a gravitational shift in the light itself is valid, providing, as Ethos pointed out, such a shift could be measured. But if they are expecting the length of the arms to be affected by the gravitational wave, and the equivalence principle holds, then light will also be gravitationally shifted. The distance of the arms will be measured by how much time it takes the light to return, but we must also consider that the rate of time will also be affected by a gravitational wave. An increase in the gravitational field will 'slow' time down according to GR. The constant speed of light travelling the 4 kilometre arms in a slower rate of time will result in the appearance of a longer distance. Vertically oriented arms will already be dealing with changes in the gravitational field in any case. In fact, in my mind, a horizontally oriented experiment would have the benefit of being oriented in a consistent gravitational field.
I daresay you might tell me now that all three of my visualisations of the experiment are wrong, and offer me a fourth! Lol!
#### Ethos_
• Neilep Level Member
• 1281
##### Re: 4 a deeper discussion: Is distance an absolute invariant?
« Reply #7 on: 12/02/2016 03:19:11 »
This, I imagined was going to a more closed up V shape...
Then Alan dropped the subtle hint that it is the length of the arms of the V that are supposed to be affected, which is when the experiment became vertically oriented in my mind.
Actually timey, I think the layout is more like a capital L and the arms lie horizontal to the earth. An arm of 4 kilometres in height would be higher than any man-made structure we currently have built on earth. The pictures I've seen have the tubes lying flat to the earth's surface.
"The more things change, the more they remain the same."
#### timey
• Neilep Level Member
• 1356
##### Re: 4 a deeper discussion: Is distance an absolute invariant?
« Reply #8 on: 12/02/2016 03:41:59 »
Well, yes... This is why I asked if the tubes might be partially set into the ground, which would be an interesting scenario in itself with regards to measuring time dilation, although in consideration of 4 km, equally impossible.
I wonder how the arms themselves are supposed to be length affected? I can visualise the possibility if they are vertically aligned, but horizontally? The mind boggles... Is it a directional thing do you suppose? North and East maybe...?
It does make more sense that they are horizontal from the point of view of a more consistent gravitational field though.
#### Ethos_
• Neilep Level Member
• 1281
##### Re: 4 a deeper discussion: Is distance an absolute invariant?
« Reply #9 on: 12/02/2016 03:58:59 »
It does make more sense that they are horizontal from the point of view of a more consistent gravitational field though.
Of course, and lying at right angles to the event would expose a much greater area for the waves to influence.
I've also been doing some more investigation and have discovered that it is indeed the cycling of shortening and lengthening of the tubes that LIGO has observed, leading to the verification of gravitational waves.
They compare the measuring of a proton's diameter with the cycling of its diameter from spherical to elliptical. By which standard that comparison is achieved, I have yet to find a detailed description of how the experiment was done.
Interesting times we live in my friend, interesting times................
« Last Edit: 12/02/2016 04:20:28 by Ethos_ »
"The more things change, the more they remain the same."
#### Colin2B
• Global Moderator
• Neilep Level Member
• 2029
##### Re: 4 a deeper discussion: Is distance an absolute invariant?
« Reply #10 on: 12/02/2016 09:24:31 »
When I looked at the experiment report last year it looked like each arm had 2 test masses, one at each end of the tube, which are connected to the mirrors. When a wave passes over, the masses will be displaced in/out (the wave isn't like a water wave, it appears as a flexion). I assume the arms are set at right angles because they don't know which direction the waves will come from, and with only one arm a wave passing parallel to the light beam wouldn't cause any flexion.
and the misguided shall lead the gullible,
the feebleminded have inherited the earth.
#### timey
• Neilep Level Member
• 1356
##### Re: 4 a deeper discussion: Is distance an absolute invariant?
« Reply #11 on: 12/02/2016 12:19:01 »
Ok, Space Flow - I have a scenario whereby we can very simply, 'if you do the math', check this gravity waves experiment out from a different angle, and I believe that you will either find these results very interesting indeed... or you will prove me wrong.
You say the Lorentz transformations equate length contraction. As far as I understand, in using the Lorentz transformations to equate length contraction, this system of mathematics 'already takes into account' the factor of the length's experience of a slowing of its time... That this equation is a melding of both these concepts and gives both results.
Now then, if you would, please just humour me here a bit...
I am now switching the polarities of the 'time' consideration. An increase in the gravitational field is now 'increasing' the rate of time. The constant speed of light is now taking a 'shorter' amount of time to cover the 4km distance.
By taking the distance by which the original 4 km distance has shrunk according to 'length contraction', and juggling the differences with the gravitational constant (it's not quite clear to me the mathematical mechanics of how to 'juggle', but it will come to me), these will divide down to the constants of square root 2, and 0.41+a bunch of other numbers.
This mathematical description, if it pans out, will tell you that 'length' or 'distance' IS an absolute invariant, and that it is the 'time' considerations that are the variable, as we have always thought, but just not occurring in the manner that physics has been thinking for all these years.
If I am right, then the Lorentz transformations themselves can also be equated. It should be possible to add up 2 of the constants used in this equation (I think, scratches head) to arrive at the constant of square root 2, and further 'juggling' should reveal the constant of 0.41+lots of other numbers.
On the other hand, I might just be completely bonkers.
#### puppypower
• Hero Member
• 573
##### Re: 4 a deeper discussion: Is distance an absolute invariant?
« Reply #12 on: 12/02/2016 13:50:36 »
This topic comes down to understanding Special Relativity in terms of Energy Conservation. Energy Conservation is one of the few laws of Physics. This law has to be enforced even for Special Relativity. The confusion for most students is connected to skating around the law. Energy Conservation is only possible with a preferred reference. Relative reference can and will violate energy conservation; it breaks the law.
Let us look at Doppler shift and Energy Conservation. Consider a fog horn and ships coming and going from the harbor. The fog horn sounds with a given frequency and uses X kilowatts of energy to operate the horn. The ships coming into the harbor will hear the pitch go up, while the ships leaving the harbor will hear the pitch go down. This relative motion changes the sound waves for the observer. However, this does not impact the reality of the fog horn, such as its electric bill.
The Doppler shift, observed as ships come and go, does not alter the sound source at the level of its mass, its size, its geometry and its energy usage. What I observe on the moving ship is how my motion impacts the sound waves that came from the fog horn after they leave the horn. If I extrapolate these changes back to the source and alter the source, it is all an illusion. The fog horn did not change for me. This assumption is egocentric and assumes I am the center of the universe and all hail me.
My motion has nothing to do with the fog horn operation. The fog horn, by being the source of the wave, is the standard reference if an energy balance is important. One way to test this is to look at the fog horn's energy bill and see if my driving past the fog horn all day long makes its electric bill go up or down. Nothing will happen except expectations.
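To make the fog horn example concrete, here is a minimal sketch of the classical Doppler formula for a stationary source and a moving observer; the 340 m/s sound speed, 200 Hz pitch and 15 m/s ship speed are made-up illustrative values.

```python
SOUND_SPEED = 340.0  # m/s in air, approximate

def observed_frequency(f_source, v_observer):
    """Classical Doppler shift, stationary source, moving observer.

    v_observer > 0 means heading toward the source (pitch rises);
    v_observer < 0 means heading away (pitch falls).
    The source itself is unaffected, as argued above.
    """
    return f_source * (SOUND_SPEED + v_observer) / SOUND_SPEED

f_horn = 200.0  # Hz, illustrative fog horn pitch
print(observed_frequency(f_horn, +15.0))  # ship entering harbor: ~208.8 Hz
print(observed_frequency(f_horn, -15.0))  # ship leaving harbor:  ~191.2 Hz
```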
The analogy is if I wear red sunglasses, the entire world will appear to have red tones from my reference. The world does not change. That would be an illusion. This illusion might be instituted among my peers, if I assume reference is relative. If reference is relative, why not pick my references as the standard? It does not matter if plants are actually green, since it is all relative to my red reference. This will require we do away with common sense and the energy balance.
Instead of a fog horn, say we have an energy horn, similar to a fog horn. This is used to warn star ships of asteroids, plasma arcs and other space travel hazards. This energy horn emits energy of a given spectrum defined by Star Fleet. The star ships coming toward the energy horn will see a blue shift in that spectrum, while the star ships going away will see a red shift in that spectrum. These star ship observations do not impact the energy horn in terms of its tangible reality; its electric bill. All that happens is the energy signal coming from the energy horn is altered by our motion, for our reference, as defined by Special Relativity. This does not alter the source. If it did alter the source, coming and going would see a different energy balance in the energy horn; violating energy conservation.
If the energy horn was in motion (an asteroid crawler, sounding off as it finds dangerous things), then since its motion implies actual added propulsion energy (kinetic energy), relativistic kinetic energy will be added to its mass/energy balance. This uses a variation of kinetic energy extrapolated by Einstein into the equations of special relativity. This will result in changes in the asteroid crawler defined by special relativity. These changes are real and are connected to an energy balance. The energy came from propulsion and not red sunglasses.
If the star ship observer is in motion, it will see its own internal special relativity effects, connected to its own relativistic kinetic energy; an energy balance due to propulsion. Ships of the same mass coming and going, at the same speed, will have the same internal changes, since both have the same kinetic and internal energy. The v^2 term of kinetic energy makes coming and going the same, since the square of plus and minus v is the same. Kinetic energy is a scalar.
Each ship, coming and going, will see a different Doppler shift in the energy horn; red sunglasses again. These two separate changes, internal energy and external Doppler shift, may appear to create different relative energy in the asteroid horn. But the actual energy of the energy horn is connected to its own energy usage and its own mass velocity.
To summarize: Doppler shift will make external energy appear to change, relative to the observation reference. However, this does not affect the source. This is red sunglasses. If the source is moving based on kinetic energy, there will be internal changes in the source based on SR. This is independent of any observer.
If I am stationary and the source moves at V, or if I am moving at V and the source is stationary, the energy balance can be different, even if the Doppler shift is the same. If my reference and the source reference have two different masses, we get two different kinetic energies based on who is moving. Relative reference only applies to the Doppler shift, but it does not apply to an energy balance. Energy balance always implies a preferred reference, since Conservation of Energy will be made void by relative reference. If I move at V or the moon moves at V relative to me, each will require different energies, even if both give the same Doppler shift.
Dark energy appears to be an artifact of assuming relative reference applies to energy conservation by making the red shift define the universe. Lost energy is appearing in other ways, through data inferences.
Special Relativity is easy if you assume Doppler shift and Energy Conservation are two separate things that are not connected. The first is relative and the second needs to be absolute, or else Energy Conservation is violated; it breaks the law.
« Last Edit: 12/02/2016 13:55:47 by puppypower »
#### Colin2B
• Global Moderator
• Neilep Level Member
• 2029
##### Re: 4 a deeper discussion: Is distance an absolute invariant?
« Reply #14 on: 12/02/2016 17:27:52 »
I was about to start a new thread from a comment you made in "is distance an absolute invariant" because I thought the point you were making has got lost in the noise that that thread has become. I don't usually quote wholesale, but here it is as a starter, seeing as this seems to be the data gathering/scoping point in the thread. If it isn't relevant to the discussion I'm happy to delete it.
Colin. No problem, in fact I think my post was just a symptom of my frustration at my inability to find anyone willing to undertake a 'progressive' discussion with me regarding GR.
When taking on board the difference between a length and a distance, by the remit of SR, a length in a reference frame that is accelerated relative to another will appear contracted to the observer in the non-accelerated reference frame. The observer on the length in the accelerated reference frame does not experience a contraction of his craft's length, and will instead experience a contracting of the distance he is travelling, relative to what the observer in the non-accelerated reference frame observes of the length's accelerated reference frame's journey.
Finally, the length's accelerated reference frame's rate of time is running slower relative to the non-accelerated frame's rate of time.
Dispensing with the SR considerations for a moment, the observer in the non-accelerated reference frame is also viewing the length and its accelerated reference frame travelling through changes in the gravitational field. These changes in the gravitational field also elicit changes in the rate of time that a clock runs at. We have tested this theory by placing clocks in all manner of elevation, and measuring by how much faster they run relative to a clock at ground level. (NIST atomic clock ground level relativity experiments 2010)... Even back in Einstein's day, it was known that a pendulum has a shorter swing up a mountain, than in the valley.
***Therefore, and based upon this sole observation I do believe, it has been decided that a gravity field slows time down. And that the rate of time runs faster out in space.***
So the observer in the non-accelerated reference frame, observing the accelerated reference frame is also viewing the length in the accelerated reference frame travelling through a gravitationally induced change, or changes, in the rate of time of its locality.
According to GR, if light travels at the speed of light across units of distance experiencing local changes in the gravitational field, and therefore is experiencing changes in the rate of time over these units of distance experiencing changes in the gravitational field, and GR does not take these local changes in the rate of time into account, then distance does indeed become a variable. It stretches!
Clearly the GR field equations do also include these changes in the local rate of time in the mix, to account for this stretching of distance that would otherwise occur.
Space Flow: I notice that you have a notion that these distortions 'may' be a factor of our viewpoint. I agree! If you think about rates of time that are occurring faster, or slower, relative to our own, it could be that we quite simply are observing a lesser percentage of the light from the local of that reference frame as a result.
My initial thoughts to get discussion rolling which I think are relevant to the new thread you have started:
.............. We have tested this theory by placing clocks in all manner of elevation, and measuring by how much faster they run relative to a clock at ground level. (NIST atomic clock ground level relativity experiments 2010)...
Along with Gravity probes A & B - both SR & GR appear to be consistent with the experiments.
Even back in Einstein's day, it was known that a pendulum has a shorter swing up a mountain, than in the valley.
The effect was known in Huygens' time - late 1600s; he derived the formula for a pendulum under the influence of gravity and used it to calculate the value of g in Paris. This effect is much larger than that due to GR.
......Therefore, ... and based upon this sole observation I do believe, it has been decided that a gravity field slows time down. And that the rate of time runs faster out in space
The only thing I would add is that for the local observer in space, time, distance, etc, appear to be unchanged.
Space Flow: I notice that you have a notion that these distortions 'may' be a factor of our viewpoint. I agree! If you think about rates of time that are occurring faster, or slower, relative to our own, it could be that we quite simply are observing a lesser percentage of the light from the local of that reference frame as a result.
Not sure what you mean by a lesser percentage of light
Although the 'distortions' seem to be a factor of our viewpoint they also appear to be quite real. Experiments show that the time dilation remains after a clock is returned to the laboratory frame. This seems to indicate that the journey took a shorter route through time, so to speak. (Note : I'm not sure about calling time a distance as GR uses ct (a distance) to represent distance light travels between events.)
That the moving object is no longer contracted could be just the result of coming back to our frame - if we are standing on the longer road, we can't measure it as shorter. We are used to 2 travellers taking different routes to the same point and taking different times.
I view Lorentz transforms for time and length as being 2 sides of same coin, which I think is the view you are coming to.
What does intrigue me, and I haven't resolved how to think about this, is that if clocks at different heights in a gravity field are showing different times between events, it implies a difference in the flow of time at those points, like different flow rates in a river. Is this a result of gravity or a cause of the motion we attribute to gravity? Also, it is important to consider whether the movement is across different heights, or parallel to. How do we resolve this with respect to acceleration, where we might see the acceleration as 'causing' the gravity effect.
PS the formula for gravity time dilation is
$$t_0\,=\,t_f\sqrt{1-\frac{2GM}{rc^2}}$$
Where tf is measured at an infinite distance from the mass.
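To put a number on that formula, here is a minimal Python sketch evaluating it at the Earth's surface; the constants are standard textbook values.

```python
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24  # kg
R_EARTH = 6.371e6   # m
C = 299_792_458.0   # m/s

def t0_over_tf(r, M=M_EARTH):
    """Ratio of proper time at radius r to time far from the mass."""
    return math.sqrt(1.0 - 2.0 * G * M / (r * C**2))

ratio = t0_over_tf(R_EARTH)
print(f"t0/tf at Earth's surface = {ratio:.12f}")
# about 1 - 7e-10: a surface clock loses roughly 0.02 s per year
# relative to a clock effectively at infinity
```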
« Last Edit: 12/02/2016 18:03:23 by Colin2B »
and the misguided shall lead the gullible,
the feebleminded have inherited the earth.
#### timey
• Neilep Level Member
• 1356
##### Re: 4 a deeper discussion: Is distance an absolute invariant?
« Reply #15 on: 12/02/2016 18:55:00 »
Well Colin, don't delete, but I think we have moved on from there a bit since then.
Please note the Lorentz transformation considerations in my last post, in relation to switching the polarities of the time consideration and the gravitational 'shift' in light, this being of relevance to the current LIGO gravity wave length contraction experiment. I am waiting with bated breath to hear if my mathematical suggestions pan out!
What is relevant at this point in the discussion is a pendulum having a shorter swing at elevation. Yes - you are correct, this is a far greater effect than GR time dilation... but this is not the point. A pendulum is, and always has been, associated with time keeping. A shorter swing means faster time. I believe that this alone is the premiss for believing that clocks tick faster in elevation. Of course, it would be a natural progression to 'assume' that time is running at a faster rate in space. That a clock will tick faster at elevation. And, lo and behold, they do. The Lorentz transformations make a perfect description of this concept, in relation to parallax distances and the speed of light. The consequence of this equation is that length, or distance, is variable.
However, just because a body of mass experiences an increase in its rate of time at elevation, this does not mean that the rate of time for that location of space runs at that rate. Indeed, for massless light, we see its rate of frequency reduce as it travels out into empty space. If you can view light's frequency as an indication of the rate of time in that location, it becomes apparent that the length of the wave is not distance related, but time related, that time is running at vastly slower rates in vastly reduced gravity fields, and all parallax distances are shorter than we currently believe. By 22.24%, or to be mathematically precise, d/square root 2.
This line of thinking renders the phenomenon of time as being 'energy' related. An increase in gravitational energy, increases the rate of time.
#### Space Flow
• Sr. Member
• 400
##### Re: 4 a deeper discussion: Is distance an absolute invariant?
« Reply #16 on: 12/02/2016 20:25:26 »
MY my my...
Go away for a bit and the whole conversation travels at the speed of light in my absence.
It appears quite a bit has been said that I will have to read and understand before commenting on any of it.
I am here to offer timey some easy math for working out how time dilation and length contraction are related to gravity, and how those effects are calculated at different radii (heights).
No calculus involved, as I have extrapolated from the premises of Spaceflow theory. All this is dependent on relative speeds, so it can just be treated with high school algebra. Any radius (height) can be treated as a situation in SR, so simplifying things.
Using the equations, which are not new, in this manner will give you equivalent free-space travel velocities for any point in any gravitational well. This way any two can be directly compared to each other.
Anyway here it is:
Spacetime Flow Rates
Definition of Gravity; See acceleration.
Definition of acceleration;
A rate of changing Geodesic. Measured as the relative rate of change of speed between matter and spacetime, and defined as a force in newtons. (Weight)
There are several ways to achieve a true accelerated state.
One is to jump on a spaceship of some sort and in free space constantly accelerate past your surrounding spacetime.
Two, you could stand on the dense surface of a massive object and let spacetime accelerate past you towards the centre of gravity. You could also stand on a rocket sled and accelerate at 1g, 1 metre above sea level, and as such view yourself as accelerating past a static spacetime (even though you are static relative to anything around you constituted of particle-based matter). Either way it is describing an equivalent situation.
When this situation is achieved by maintaining a radius from a centre of Mass without angular momentum (Hover), the speed of the spaceflow past a thus otherwise stationary-seeming observer is given by “V=sqrt((2GM)/r)” in m/s towards the centre of Mass. If V (Spaceflow velocity) ever equals “c”, the speed of light, you have reached the Schwarzschild radius of a Black Star (hole). Not that such an animal as a non-rotating Black Star is possible.
Where; V is the flow velocity of spacetime.
G is the Flow-rate Constant (6.67e-11 m^3/kg/s^2) (formerly the gravitational constant)
M is the amount of Mass in Kgs inside a sphere described by the radius.
And r is the distance from the centre of mass in metres. (Radius)
This describes velocities in an accelerated frame, and the rate of acceleration is still given by Newton's A=GM/r^2, where A is acceleration.
The so-called centripetal force, now supplied by the inflowing spacetime, is still given by f=(mV^2)/r and also still equals GMm/r^2
Orbital velocities are still calculated by; Vo=sqrt((GM)/r)
Where Vo is the orbital speed needed to stay Geodesic (Free Fall) at a known radius.
NOTE: Atmospheric drag and the effects of spinning bodies have been totally ignored in this treatment, as it would add a level of complexity without significantly helping in communicating the concept or significantly changing the results until you start dealing with ultracompact objects with high spin rates.
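For anyone who wants to try the numbers, here is a small Python sketch that simply evaluates the formulas above with Earth values; it takes no position on the interpretation, since numerically these match the standard Newtonian results.

```python
import math

G = 6.674e-11       # m^3 kg^-1 s^-2 (the "Flow-rate Constant" above)
M_EARTH = 5.972e24  # kg
R_EARTH = 6.371e6   # m

def flow_velocity(M, r):
    """V = sqrt(2GM/r): inflow speed at radius r (numerically the escape velocity)."""
    return math.sqrt(2.0 * G * M / r)

def acceleration(M, r):
    """A = GM/r^2: acceleration at radius r."""
    return G * M / r**2

def orbital_velocity(M, r):
    """Vo = sqrt(GM/r): circular orbital speed at radius r."""
    return math.sqrt(G * M / r)

print(flow_velocity(M_EARTH, R_EARTH))     # ~11186 m/s
print(acceleration(M_EARTH, R_EARTH))      # ~9.82 m/s^2
print(orbital_velocity(M_EARTH, R_EARTH))  # ~7910 m/s
```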
« Last Edit: 17/02/2016 05:19:01 by Space Flow »
We are made of Spacetime; with a sprinkling of Stardust.
Matter tells Spacetime how to Flow; Spacetime tells matter where to go
#### timey
• Neilep Level Member
• 1356
##### Re: 4 a deeper discussion: Is distance an absolute invariant?
« Reply #17 on: 12/02/2016 22:03:36 »
Well Space Flow, it would seem that your theory and mine are at complete and total cross purposes with each other (chuckle). No matter... I consider alternative physics theories as synonymous with a lottery ticket: one that doesn't cost money, is much more entertaining, but shares the same probability issues in being a winner!
One observation: my theory is a damn sight easier to disprove than yours... lol!
#### jeffreyH
• Global Moderator
• Neilep Level Member
• 4068
• The graviton sucks
##### Re: 4 a deeper discussion: Is distance an absolute invariant?
« Reply #18 on: 12/02/2016 22:24:52 »
Consider a traveler who is heading towards a world 20 light years away at a speed that dilates his time so that the journey appears to take only 1 year from his point of view. If he now calculates his journey time with that of the speed of light, with distance data he had calculated in advance, he finds the journey took 1/20 of the time light would take to travel the same distance. If he knew nothing of time dilation he would assume he had traveled faster than light. Yet time has moved on without him to compensate for his misconception. Relativity is THAT strange.
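Here is a sketch of the arithmetic behind that trip, working in units where c = 1 and distance is in light years; the 20 ly and 1 year figures are the ones in the post above.

```python
import math

d = 20.0   # light years, distance in the Earth frame
tau = 1.0  # years, proper time the traveller experiences

# tau = (d / v) / gamma, so solving for beta = v/c gives:
beta = d / math.sqrt(d**2 + tau**2)
gamma = 1.0 / math.sqrt(1.0 - beta**2)
earth_time = d / beta  # years elapsed in the Earth frame

print(f"required speed   = {beta:.5f} c")            # ~0.99875 c
print(f"gamma            = {gamma:.3f}")             # ~20.02
print(f"Earth-frame time = {earth_time:.3f} years")  # ~20.02 years
```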
#### Space Flow
• Sr. Member
• 400
##### Re: 4 a deeper discussion: Is distance an absolute invariant?
« Reply #19 on: 12/02/2016 22:33:02 »
Well Space Flow, it would seem that your theory and mine are at complete and total cross purposes with each other (chuckle). No matter... I consider alternative physics theories as synonymous with a lottery ticket: one that doesn't cost money, is much more entertaining, but shares the same probability issues in being a winner!
One observation: my theory is a damn sight easier to disprove than yours... lol!
Well you see dear lady, I don't aim to throw out two and a half thousand years of accumulated knowledge about the way things work.
I have no problems with GR as such. Just the way it's been interpreted for the last hundred years.
You will find that if you just change perspective, all the observations, experimental evidence, and maths support my view even better than they support the curved-space view.
As I said it is not a big change, but I think an important one. There is no reason to throw out Einstein's great work, just redefine it so we have an avenue to move forward.
That does not necessarily mean the direction of your thinking is wrong, but you may have guessed by now, from my comments on these posts, that I have a couple of problems with your point of view.
Several of your hypotheses do not fit the observed facts.
But maybe after these conversations we have been having, you might want to revise what you sent me. Maybe not!!!
We are made of Spacetime; with a sprinkling of Stardust.
Matter tells Spacetime how to Flow; Spacetime tells matter where to go
#### timey
• Neilep Level Member
• 1356
##### Re: 4 a deeper discussion: Is distance an absolute invariant?
« Reply #20 on: 12/02/2016 23:38:46 »
Space Flow - I too respect the work of Einstein and other greats, too many to mention. I recognise what you have done, are doing, and can see that it is a different way of equating relativity. You haven't made it clear what the benefits of this way of doing things are though, and I'd be interested to hear!
My theory does not in the least throw any of the work done by the greats of science aside. Nor does it start from scratch. If my theory is going to be doing anything at all, it will be to mesh all this great work together. It quite simply states that one mistake has been made. In rectifying that mistake, not only do we arrive at the 'observed' universe, albeit by a different mathematical process that affords us an absolute reference frame, but we also get a cyclic universe, and we get it without adding any unobserved quantities into our universe. No dark matter. No dark energy. I do realise though that it is a bit of a paradigm leap for the mind that is trained in relativity, to take on board that GR time dilation is perhaps just a mass near mass phenomenon, that black holes are full of energy, where time runs extremely fast, while the slowing of time that a traveller experiences in space is because time runs slow in space. I do not understand where you have a problem with observation fitting the theory, probably because you didn't say.
However, whether one understands the premiss or not, this does not really have any bearing upon picking apart the Lorentz transformations in search of the constants square root 2 and 0.41+other numbers.
I'm saying that the contraction of the tubes in the gravity waves experiment is caused by a time 'contraction', not a length contraction. That the Lorentz transformations are allowing for a slowing of time within the equation, and in the event that time is actually going quicker, the resulting length contraction 'measure' of contraction length, is in fact not a length contraction, but a time contraction.
Now you have to realise that I am doing all of this in my head, but I think (rubs chin) that the allowance for a slowing of time in the equation needs to be added to the measure of the length of the contraction, subtracted from the distance, and if you divide all the aspects by each other, 2 of the results will be the constants square root 2 and 0.41+other numbers.
Now then, I cannot do this myself on this iPhone screen with a free internet scientific calculator. Not without going to specsavers anyway!
Any help from any mathematician appreciated, I'd be really happy to be proved wrong...
#### Colin2B
• Global Moderator
• Neilep Level Member
• 2029
##### Re: 4 a deeper discussion: Is distance an absolute invariant?
« Reply #21 on: 12/02/2016 23:41:16 »
What is relevant at this point in the discussion is a pendulum having a shorter swing at elevation. Yes - you are correct, this is a far greater effect than GR time dilation... but this is not the point. A pendulum is, and always has been, associated with time keeping. A shorter swing means faster time. I believe that this alone is the premiss for believing that clocks tick faster in elevation.
But a pendulum clock loses time at elevation unless corrected; that is what intrigued me when you included it. The period of a pendulum is inversely proportional to √g, so each swing takes longer and the clock slows down. Bit of a red herring really, but best to get it out of the discussion.
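A quick numerical sketch of that point, using the standard small-angle period T = 2π√(L/g); the 1 m pendulum and the mountain-top value of g are rough illustrative figures.

```python
import math

def pendulum_period(length, g):
    """Small-angle period of a simple pendulum: T = 2*pi*sqrt(L/g)."""
    return 2.0 * math.pi * math.sqrt(length / g)

L = 1.0            # metres, illustrative pendulum length
g_sea = 9.81       # m/s^2 at sea level, approximate
g_mountain = 9.80  # m/s^2 a few kilometres up, rough figure

print(pendulum_period(L, g_sea))       # ~2.0061 s
print(pendulum_period(L, g_mountain))  # ~2.0071 s: longer swing, clock runs slow
```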
Edit: reading through this thread, am I right in thinking you are proposing that the effect of gravity on time is the opposite of what current GR says, or have I misread? Not sure I have grasped your ideas properly.
« Last Edit: 12/02/2016 23:52:55 by Colin2B »
and the misguided shall lead the gullible,
the feebleminded have inherited the earth.
#### timey
• Neilep Level Member
• 1356
##### Re: 4 a deeper discussion: Is distance an absolute invariant?
« Reply #22 on: 13/02/2016 00:18:33 »
Yes Colin, actually I did make a mistake in what I said there about faster time. But...take that lateral thinking a couple of steps further. A clock loses time. A second losing time. A shorter second...
Maybe it's just me, and it matters not how they got there really: they did, and clocks do tick faster in elevation. Clocks are mass. GR time dilation 'could' be just a mass near mass phenomenon, and the phenomenon of time 'could' be caused by energy, with the rate of time set at stopped in a zero gravity field. If you add up inherent mass energy with 'space energy' (synonymous with inherent mass energy) and gravity potential, and subtract 0.5mv^2 energy, you get the rate of time as a result. It's actually a time matrix: 3 positive and 1 negative, and it can work in conjunction with a matrix of 3 dimensions of space, whereas the 4th time aspect of this spacetime matrix is the resulting rate of time of the time matrix.
Clearly it would be a miracle if I'm right Colin, but even 'wrong', I reckon it's a stunning piece of logic. Worthy of a calculation anyway.
#### timey
• Neilep Level Member
• 1356
##### Re: 4 a deeper discussion: Is distance an absolute invariant?
« Reply #23 on: 13/02/2016 00:28:26 »
Edit: reading through this thread, am I right in thinking you are proposing that the effect of gravity on time is the opposite of what current GR says, or have I misread?
No, I'm not suggesting that it is the opposite of GR time dilation. GR time dilation remains as a mass near mass phenomenon. This is an additional time dilation and is the root cause of time. Take a parallax distance divided by square root 2, subtracting the result from the original figure; this part of the parallax distance is not distance, but is related to the slowing of time. This will comprise 0.41+other numbers of the parallax distance.
Under this remit the gravity-wave length contraction is not a length contraction; it is a time contraction. A time contraction, under the remit of calculating it as a time dilation, will cause the maths to give you a length contraction.
#### Space Flow
• Sr. Member
• 400
##### Re: 4 a deeper discussion: Is distance an absolute invariant?
« Reply #24 on: 13/02/2016 00:44:30 »
I do realise though that it is a bit of a paradigm leap for the mind that is trained in relativity, to take on board that GR time dilation is perhaps just a "mass near mass phenomenon", (Not sure what you mean by that) that black holes are full of energy, where time runs extremely fast, (That should say slow) while the slowing of time that a traveller experiences in space is because time runs slow in space.(Proven fact time runs faster in free space.) I do not understand where you have a problem with observation fitting the theory, probably because you didn't say.
Sorry, above are a couple of points that have directly been shown to be wrong (time runs slow in space), or are mathematically extracted from theory that is based on what has directly been shown to be wrong (that black holes are full of energy, where time runs extremely fast). When you input more energy into anything, you slow the rate at which it feels time compared to you.
And a statement I don't understand: "mass near mass phenomenon".
« Last Edit: 13/02/2016 00:49:36 by Space Flow »
We are made of Spacetime; with a sprinkling of Stardust.
Matter tells Spacetime how to Flow; Spacetime tells matter where to go
#### timey
• Neilep Level Member
• 1356
##### Re: 4 a deeper discussion: Is distance an absolute invariant?
« Reply #25 on: 13/02/2016 01:28:22 »
Ok, time has been shown to run fast in space. How has time been shown to run fast in space? By a clock. Does the clock have mass and associated mass? Yes it does. So... time has NOT been shown to run fast in space! Time has been shown to run fast for a clock and its associated mass in space. What rate time is running at in that space when the clock and its associated mass are not there has not been proven at all.
Therefore, this theory examines the possibility that GR time dilation is a mass near mass phenomenon, and that the rate of time runs slow in space. Light has no mass. Its frequency reduces by means of gravitational redshift. Rendering relativistic mass as redundant, this theory states the frequency of light as being indicative of the rate of time, and the increase in the wavelength as being time related, not distance related.
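For reference, the standard gravitational redshift being referred to can be sketched as follows; this is the weak-field Schwarzschild formula, and the solar values are purely illustrative.

```python
import math

G = 6.674e-11      # m^3 kg^-1 s^-2
C = 299_792_458.0  # m/s

def redshift_factor(M, r):
    """f_received_far_away / f_emitted for light climbing out from radius r:
    sqrt(1 - 2GM/(r c^2)) in the static Schwarzschild case."""
    return math.sqrt(1.0 - 2.0 * G * M / (r * C**2))

M_SUN, R_SUN = 1.989e30, 6.957e8  # kg, m
print(1.0 - redshift_factor(M_SUN, R_SUN))  # ~2.1e-6 fractional shift from the Sun's surface
```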
« Last Edit: 13/02/2016 01:54:30 by timey »
#### jeffreyH
• Global Moderator
• Neilep Level Member
• 4068
• The graviton sucks
##### Re: 4 a deeper discussion: Is distance an absolute invariant?
« Reply #26 on: 13/02/2016 01:58:13 »
In non-relativistic Newtonian terms, the average velocity over a distance d is given by:
$$v_a\,=\,\frac{\sqrt{2gd}}{2}$$.
Instantaneous velocity at distance d is then:
$$v_i\,=\,\sqrt{2gd}$$
Then the instantaneous kinetic energy is
$$\frac{1}{2}mv_i^2\,=\,\frac{GMm}{r^2}d$$
Since the field extends to infinity then this function is continuous to infinity. Thus the gradient of time dilation must be continuous to infinity and will not reverse since the gravitational field is non-vanishing.
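A quick numeric check of those three lines, treating g as locally constant over a short fall d; the Earth-surface numbers are illustrative.

```python
import math

G = 6.674e-11  # gravitational constant
M = 5.972e24   # kg, Earth
r = 6.371e6    # m, Earth radius
m = 1.0        # kg, test mass
d = 10.0       # m, a short fall so g is effectively constant

g = G * M / r**2
v_i = math.sqrt(2.0 * g * d)   # instantaneous speed after falling d
ke = 0.5 * m * v_i**2          # kinetic energy gained
work = (G * M * m / r**2) * d  # force times distance over the same drop

print(v_i)       # ~14.0 m/s
print(ke, work)  # both ~98.2 J: (1/2)mv^2 = GMm d / r^2 checks out
```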
#### timey
• Neilep Level Member
• 1356
##### Re: 4 a deeper discussion: Is distance an absolute invariant?
« Reply #27 on: 13/02/2016 02:16:57 »
Jeff - I feel really stupid! Like I should know how that relates to what I'm saying...but I don't. Would you please put it into context for me?
#### Space Flow
• Sr. Member
• 400
##### Re: 4 a deeper discussion: Is distance an absolute invariant?
« Reply #28 on: 13/02/2016 02:28:42 »
Ok, time has been shown to run fast in space. How has time been shown to run fast in space? By a clock. Does the clock have mass and associated mass? Yes it does. So... time has NOT been shown to run fast in space! Time has been shown to run fast for a clock and its associated mass in space. What rate time is running at in that space when the clock and its associated mass are not there has not been proven at all.
Therefore, this theory examines the possibility that GR time dilation is a mass near mass phenomenon, and that the rate of time runs slow in space. Light has no mass. Its frequency reduces by means of gravitational redshift. Rendering relativistic mass as redundant, this theory states the frequency of light as being indicative of the rate of time, and the increase in the wavelength as being time related, not distance related.
OK,
I need some guidance here on how you want me to approach this;
You have made some statements that are contrary to observational evidence, and you have justified that discrepancy on the fact that taking a measurement of a situation changes the situation.
We can take no direct measurement of anywhere where matter is not. Spacetime could be doing loop de loops where no one can watch it, but unless you have come up with a unique way of monitoring a location in spacetime that includes no matter, in such a way as to determine if it appears time or space distorted in some way, I can not help you.
You are also trying to disconnect space from time and say that only one is variable.
The Lorentz transformations give us a conversion factor that is applied the same to all physical observations. To say that it does not apply to space but only to time is to say that all experiments for the last 100 years that have confirmed the theory of General Relativity have somehow been wrong.
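For reference, these are the transformations in question, in their standard one-dimensional form:
$$t'\,=\,\gamma\left(t-\frac{vx}{c^2}\right),\qquad x'\,=\,\gamma\left(x-vt\right),\qquad \gamma\,=\,\frac{1}{\sqrt{1-v^2/c^2}}$$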
You can not within the bounds of GR find any way to disconnect the two. You would have to totally throw it out and start again, with a different theory that still fitted all the observations accumulated so far.
You are asking me to try and help you calculate a geometry I can not visualise without throwing out hard evidence, or having an alternative explanation for observations that contradict your geometry.
And blaming the observations on the fact that it is something material that was used to make them, is not a good reason to push them aside. Not if we are talking Physics.
I am not that good. Sorry..
« Last Edit: 13/02/2016 02:31:32 by Space Flow »
We are made of Spacetime; with a sprinkling of Stardust.
Matter tells Spacetime how to Flow; Spacetime tells matter where to go
#### jeffreyH
• Global Moderator
• Neilep Level Member
• 4068
• The graviton sucks
##### Re: 4 a deeper discussion: Is distance an absolute invariant?
« Reply #29 on: 13/02/2016 02:53:26 »
Jeff - I feel really stupid! Like I should know how that relates to what I'm saying...but I don't. Would you please put it into context for me?
http://theory.uwinnipeg.ca/physics/work/node3.html
#### timey
• Neilep Level Member
• 1356
##### Re: 4 a deeper discussion: Is distance an absolute invariant?
« Reply #30 on: 13/02/2016 03:05:30 »
But Space Flow - I have indeed shown a means for equating what time is doing in space. It's light. Relativistic mass rendered redundant means that light is picking up its energy purely from its surroundings. Energy denotes frequency, and frequency denotes wavelength.
Furthermore, I am saying that the Lorentz transformations are faulty. Don't use them. I've given a means to finding the constants of square root 2 and 0.41 within the Lorentz transformations to 'prove' or 'disprove' my theory, because the equation that I am suggesting as an alternative should exactly match the result of the Lorentz transformations, but from a different mathematical route, and for a different reason.
The alternate - d/square root 2, subtract result from d = 0.41 of d. This 0.41 of d is time related, not distance related. It takes the constant speed of light, this distance turned back into time (our rate of time) longer to travel d/square root 2 = revised distance.
You say that blaming observations on the materials used to measure them isn't physics. I am stating time as energy related. In an energy related equation, M + m is a consequence. Also m has potential energy considerations. The clock and its associated mass have more associated energy than the space it occupies does when it's not there. Light, in that relativistic mass is stated redundant, has no potential energy considerations. Therefore, in that light is 'just' picking up gravitational field energy in space, light 'is' our clock in space!
Yes, of course I'm trying to say that it is only time and not space that is variable, how else can one achieve an 'absolute reference' frame from which everything else can be equated?
#### timey
• Neilep Level Member
• 1356
##### Re: 4 a deeper discussion: Is distance an absolute invariant?
« Reply #31 on: 13/02/2016 03:21:15 »
In non-relativistic Newtonian terms, the average velocity over a distance d is given by:
$$v_a\,=\,\frac{\sqrt{2gd}}{2}$$.
Instantaneous velocity at distance d is then:
$$v_i\,=\,\sqrt{2gd}$$
Then the instantaneous kinetic energy is
$$\frac{1}{2}mv_i^2\,=\,\frac{GMm}{r^2}d$$
Since the field extends to infinity then this function is continuous to infinity. Thus the gradient of time dilation must be continuous to infinity and will not reverse since the gravitational field is non-vanishing.
Sorry Jeff, but despite the link you provided, I'm still none the wiser as to understanding the context you have posted this in.
I kind of get that you are showing that time dilation will dilate to infinity in a non-vanishing gravitational field.
Are you saying this relates to my notion of time contracting in a gravitational field?
#### alancalverd
• Global Moderator
• Neilep Level Member
• 4815
• life is too short to drink instant coffee
##### Re: 4 a deeper discussion: Is distance an absolute invariant?
« Reply #32 on: 13/02/2016 09:02:31 »
Clocks are mass.
And there's the root of a misunderstanding. GR predicts time dilation independent of the mass or density of any device you use to measure it, and the frequency of an atomic clock is not determined by the mass, density or weight of any component.
AFAIK the various clocks used by, for instance, ground stations, GPS satellites, aircraft and spacecraft, all have different masses and are surrounded by carriers of different masses, yet they all do the same thing.
When we have an entirely theoretical prediction confirmed to a remarkable degree of accuracy by several independent practical experiments, we tend to accept the primary hypothesis.
helping to stem the tide of ignorance
#### Space Flow
• Sr. Member
• 400
##### Re: 4 a deeper discussion: Is distance an absolute invariant?
« Reply #33 on: 13/02/2016 09:21:54 »
But Space Flow - I have indeed shown a means for equating what time is doing in space. It's light. Relativistic mass rendered redundant means that light is picking up its energy purely from its surroundings. Energy denotes frequency, and frequency denotes wavelength.
Furthermore, I am saying that the Lorentz transformations are faulty. Don't use them. I've given a means to finding the constants of square root 2 and 0.41 within the Lorentz transformations to 'prove' or 'disprove' my theory, because the equation that I am suggesting as an alternative should exactly match the result of the Lorentz transformations, but from a different mathematical route, and for a different reason.
The alternate - d/square root 2, subtract result from d = 0.41 of d. This 0.41 of d is time related, not distance related. It takes the constant speed of light, this distance turned back into time (our rate of time) longer to travel d/square root 2 = revised distance.
You say that blaming observations on the materials used to measure them isn't physics. I am stating time as energy related. In an energy related equation, M + m is a consequence. Also m has potential energy considerations. The clock and its associated mass have more associated energy than the space it occupies does when it's not there. Light, in that relativistic mass is stated redundant, has no potential energy considerations. Therefore, in that light is 'just' picking up gravitational field energy in space, light 'is' our clock in space!
Yes, of course I'm trying to say that it is only time and not space that is variable, how else can one achieve an 'absolute reference' frame from which everything else can be equated?
OK, maybe it's me that needs the help here, with this visualisation you describe.
First I need to clear up your view on what I see as contrary evidence to that view.
Now I noticed from a few different comments here and elsewhere that people are not really clear on the principle or the method of Gravitational wave detection by systems like LIGO and VIRGO.
The detectors are two evacuated tunnels of exactly 4 km each. They form the two arms of a laser interferometer.
This instrument is designed to measure distance to an accuracy that makes the nucleus of an atom look like a small Moon. It does nothing else other than measure length or distance, whichever you want to call it.
It does this by taking the one laser beam, and splitting it into two. One is sent down one tunnel to a mirror at the end and the other one down the other tunnel.
When those identical beams come back they are heterodyned together. If all things are equal, the two beams recombine into a perfect replica of the beam that was emitted, and that is what the oscilloscope (or whatever high-tech version of an oscilloscope they are using) analyses.
This system of detection depends on the constant speed of light, and on the fact that if a gravity wave hits it, it will come from a certain direction. Unless that direction is at exact right angles to both arms of the detector, it will affect one arm before the other. Even though the wave is coming through at the speed of light, the lasers are fast enough to detect this difference and tell us about it.
Also the frequency of the laser gives us the resolution.
Now light travels at the speed of light and as such it is timeless. Time dilation can have no effect on it. That means that this system is only capable of measuring the space part of spacetime.
If the distance in either arm of these detectors changes at all in its length, the two heterodyned return lasers will not be in synch and will display an interference pattern.
This is what has been reported to have happened last September in both LIGO and VIRGO detectors.
I would need a reasonable explanation for these observations from the point of view of your theory.
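To put a number on the two-beam principle, here is a minimal sketch; the wavelength, strain amplitude and arm length below are illustrative assumptions, not LIGO's actual operating parameters:

```python
import math

wavelength = 1064e-9   # m; a common laser line (illustrative assumption)
arm_length = 4000.0    # m; one 4 km arm
strain = 1e-21         # illustrative gravitational-wave strain, h = dL / L

delta_L = strain * arm_length                 # differential arm-length change
phase = 4 * math.pi * delta_L / wavelength    # round trip doubles the path difference
intensity = math.cos(phase / 2) ** 2          # normalised two-beam interference output

print(f"arm length change: {delta_L:.3e} m")  # ~4e-18 m, far smaller than a nucleus
print(f"phase shift:       {phase:.3e} rad")
```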
Secondly, we now have a large accumulation of observational data for an effect called gravitational lensing. A lot of this data is from regions that don't show any matter associated with the cause. This is what is fuelling a lot of speculation about an imaginary "Dark Matter" particle.
That aside, we have definite data that space in those places is deformed to the point of bending light.
A time dilation region of space might redshift or blueshift light, but warped time can not change light's direction by any means I am aware of. So if space can deform in such a way as to achieve this, and even do it without the presence of matter, then it is not invariant. That too needs explaining.
And of course we have the main one that all of us are quite aware of: GRAVITY. How does gravity do what it does? Einstein's, and as such the world's, view for the last 100 years tells us that gravity comes about because mass deforms, bends, and twists space. Is that the description of a constant invariant space?
If you can bend and twist it, then why can't you stretch and squish it?
And if you can't do all those things to space, then what is your definition for all these things?
There are probably a lot more but explaining these will suffice.
We are made of Spacetime; with a sprinkling of Stardust.
Matter tells Spacetime how to Flow; Spacetime tells matter where to go
#### Space Flow
• Sr. Member
• 400
##### Re: 4 a deeper discussion: Is distance an absolute invariant?
« Reply #34 on: 13/02/2016 09:25:19 »
PS: I think what Jeff is trying to tell you is that there is nowhere in the Universe where you are not under the influence of Gravity.
There is no space free of gravity, as I believe you used such a description as a place that time stops.
We are made of Spacetime; with a sprinkling of Stardust.
Matter tells Spacetime how to Flow; Spacetime tells matter where to go
#### timey
• Neilep Level Member
• 1356
##### Re: 4 a deeper discussion: Is distance an absolute invariant?
« Reply #35 on: 13/02/2016 09:34:53 »
Clocks are mass.
And there's the root of a misunderstanding. GR predicts time dilation independent of the mass or density of any device you use to measure it, and the frequency of an atomic clock is not determined by the mass, density or weight of any component.
AFAIK the various clocks used by, for instance, ground stations, GPS satellites, aircraft and spacecraft, all have different masses and are surrounded by carriers of different masses, yet they all do the same thing.
When we have an entirely theoretical prediction confirmed to a remarkable degree of accuracy by several independent practical experiments, we tend to accept the primary hypothesis.
Granted Alan, and you make a good point concerning associated mass concerning the atomic clock!
To be clearer... A caesium atom has mass. It has a higher frequency at elevation. When you record this frequency, you are recording the activity of a 'body' of mass at elevation. This mass of the caesium atom in relation to the greater mass of the earth is what you are recording. You are not recording what the frequency is of the 'space' that caesium atom has been elevated to.
There was an entirely logical theoretical prediction involved in the notion that the sun revolved around the earth. Look how that turned out...
Clearly, if GR gave us a full understanding of gravity, then these 'New Theory' conversations concerning gravity would be obsolete. It is only because theoretical physicists are looking for a means to link quantum to gravity, that 'looking' at alternate logic is occurring.
This is alternate logic Alan, and it is entirely logical. This doesn't, and isn't going to make it 'right' of course, but it does make it worthy of a calculate.
#### timey
• Neilep Level Member
• 1356
##### Re: 4 a deeper discussion: Is distance an absolute invariant?
« Reply #36 on: 13/02/2016 09:53:20 »
Space Flow - please forgive me! There was a certain element of humour involved in my circumnavigation of the gravity wave experiment. Of course I did not think that 4 km tubes are vertically aligned, even if partially sunk into the ground. I've been following the gravity wave experiment for quite some time with great interest, and although the maths are complicated and a bit impregnable to me, I get the premiss entirely!
However, if light gravitationally shifts when exposed to changes in a gravitational field, then the light in that experiment is 'shifted', end of story, and they will be recording a shift in time. No doubt about it! ... If they are comparing this data to the remit of a Lorentz contraction, then this equation is already taking into account the 'shift' in time. K?
I'll get on rest of your post later....
#### Colin2B
• Global Moderator
• Neilep Level Member
• 2029
##### Re: 4 a deeper discussion: Is distance an absolute invariant?
« Reply #37 on: 13/02/2016 10:12:22 »
However, if light gravitationally shifts when exposed to changes in a gravitational field, then the light in that experiment is 'shifted', end of story, and they will be recording a shift in time. No doubt about it! ...
So you are saying that the gravity wave is strong enough to affect the light beam. I don't think the system is set up to detect a time shift, and remember Eddington had to use the intense field of the sun to be able to see an effect on light.
PS: where can I see the working calcs you mention? Have I missed them?
and the misguided shall lead the gullible,
the feebleminded have inherited the earth.
#### timey
• Neilep Level Member
• 1356
##### Re: 4 a deeper discussion: Is distance an absolute invariant?
« Reply #38 on: 13/02/2016 10:33:57 »
Colin - just quickly because I must get on with my day... No, I am 'not' saying that the light is 'bent'... (my theory states that light cannot be bent by gravity, it's massless) ... I'm saying that the light is gravitationally 'shifted'. And yes, that a 'change' in the rate of time is occurring. That physics is calculating this 'change' as being slower. This is causing the appearance of a length contraction. If you calculate under the remit of this 'change' in time as being a quicker time, then you can see the length has not contracted. The contracted 'time' has caused the constant speed of light to cover the distance a bit quicker, is all.
Yes, I do have some calculations that I derived from a 'to scale' geometrical diagram that I conceived expressing my idea. I cannot use the maths symbols thingy here on the forum. I say cannot; actually, it's that I haven't used it before. But I will write out my formula and related constants again, photograph them and post it later. I'm not going to post my diagrams though. If you fancy a look, I can send them to you by private message, under the remit of a private message being private.
#### alancalverd
• Global Moderator
• Neilep Level Member
• 4815
• life is too short to drink instant coffee
##### Re: 4 a deeper discussion: Is distance an absolute invariant?
« Reply #39 on: 13/02/2016 11:42:47 »
A caesium atom has mass.
But the difference in energy between the hyperfine ground states of a cesium atom is not mass-dependent.
Nor, come to think of it, is the period of a pendulum!
« Last Edit: 13/02/2016 11:45:25 by alancalverd »
helping to stem the tide of ignorance
#### Colin2B
• Global Moderator
• Neilep Level Member
• 2029
##### Re: 4 a deeper discussion: Is distance an absolute invariant?
« Reply #40 on: 13/02/2016 12:07:16 »
my theory states that light cannot be bent by gravity, it's massless
It's not the mass you consider, but the momentum which is affected by gravity.
Alan is right, pendulum not affected by mass of the bob, just length and g.
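For reference, the small-angle period is
$$T = 2\pi\sqrt{\frac{\ell}{g}}$$
which contains the length $\ell$ and local $g$, but no mass.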
No hurry for the maths as I'm quite busy this w/e. Just check over that you are not doing the equivalent of a circular argument when you talk about feeding results back in.
When you are ready pm (confidentiality respected) and I'll send you something to show an easier way to do equations on here.
and the misguided shall lead the gullible,
the feebleminded have inherited the earth.
#### timey
• Neilep Level Member
• 1356
##### Re: 4 a deeper discussion: Is distance an absolute invariant?
« Reply #41 on: 13/02/2016 12:19:04 »
A caesium atom has mass.
But the difference in energy between the hyperfine ground states of a cesium atom is not mass-dependent.
Nor, come to think of it, is the period of a pendulum!
Correct - they are both gravity related, and gravity causes changes in the rate of time...
The pendulum has a shorter swing, being subject to less gravitational energy at elevation. Being a mass though, it is still subject to potential energy. If we recorded the frequency of the atoms that comprise the pendulum at ground level, and then at elevation... we would find that the frequency of those atoms has increased at elevation. This is due to potential energy.
Same as for the caesium atom!
Light has no mass. No potential energy considerations there. The frequency of light reduces in a reduced gravitational field, because it is not experiencing any potential energy.
#### timey
• Neilep Level Member
• 1356
##### Re: 4 a deeper discussion: Is distance an absolute invariant?
« Reply #42 on: 13/02/2016 12:25:35 »
Ah Colin - well I hadn't been going to post the maths only for your benefit, which is just as well. It would seem you are telling me that you are not really all that interested.
I'd be happy for you to school me in posting mathematics. Thanks! ...and if you do find yourself interested enough to want to have a look at the diagrams, just say, and I'll send them to you.
#### alancalverd
• Global Moderator
• Neilep Level Member
• 4815
• life is too short to drink instant coffee
##### Re: 4 a deeper discussion: Is distance an absolute invariant?
« Reply #43 on: 13/02/2016 12:39:04 »
Correct - they are both gravity related, and gravity causes changes in the rate of time...
Alas, there is no mention of gravitation in the Schrodinger equation that defines the ground states of the cesium atom. That's why we use it for space clocks.
helping to stem the tide of ignorance
#### puppypower
• Hero Member
• 573
##### Re: 4 a deeper discussion: Is distance an absolute invariant?
« Reply #44 on: 13/02/2016 12:41:30 »
I was thinking last night about distance being an absolute. I was able to come up with an example. It is so obvious, everyone seemed to miss it. Consider the bond length of H2, the hydrogen gas molecule. If the laws of physics are the same in all references, the bond length for hydrogen gas is an absolute that will be the same in all references.
If we alter this bond length, we will get something that is not hydrogen gas. If I am on a moving reference and I see what appears to be distance contraction, halving the H2 bond length, and hydrogen is still a gas, what I see will be an illusion. The distance for hydrogen gas is an invariant. At half that distance, hydrogen would need to change phase and become a solid. If it is not solid but still a gas, I saw an energy-based illusion; it violates energy conservation. Each phase occurs at a specific energy, which is the same in all references.
If we only think in terms of space-time, distances are not absolute, but will be reference dependent. But once you add mass, matter, and therefore the forces of nature, you also have energy conservation. Here distances become absolute, and we use that to define the states and phases of matter.
In the topic of absolute distance, some argue yes and some argue no. It is actually yes and no. It is yes in terms of the phases of matter, but no if we only look at space-time. Those who try to reduce mass to space-time tend to violate energy conservation by making invariant distances variable, thereby allowing things that defy common sense. Hydrogen gas, at half the bond length, is an illusion.
In my long rant (previous post) about special relativity, I separated SR into internal and external SR, with internal SR absolute and external SR relative. Internal SR takes into account the mass/energy of the observational system (matter-based, constrained by energy conservation), while external SR is only concerned with the energy that reflects off objects.
The question that comes to my mind is what is the impact of internal relativity on external relativity? Does the internal energy of a moving object impact what it sees on the outside? In GR, the space-time well parallels a pressure well connected to matter and gravity. The top of the space-time well of the sun has the lowest pressure, while the bottom of the space-time well has the highest pressures. In terms of material phases, invariant distances get smaller as we go down the space-time well due to pressure and phase changes. Materials with the smallest invariant distance (the core of the sun), being at the bottom of the well, see external space-time in a more contracted way.
« Last Edit: 13/02/2016 12:44:52 by puppypower »
#### alancalverd
• Global Moderator
• Neilep Level Member
• 4815
• life is too short to drink instant coffee
##### Re: 4 a deeper discussion: Is distance an absolute invariant?
« Reply #45 on: 13/02/2016 12:41:57 »
The frequency of light reduces in a reduced gravitational field, because it is not experiencing any potential energy.
No, it increases.
helping to stem the tide of ignorance
#### alancalverd
• Global Moderator
• Neilep Level Member
• 4815
• life is too short to drink instant coffee
##### Re: 4 a deeper discussion: Is distance an absolute invariant?
« Reply #46 on: 13/02/2016 12:48:02 »
If the laws of physics are the same in all references, the bond length for hydrogen gas is an absolute that will be the same in all references.
but it is known to stretch!
New Journal of Physics 5 (2003) 124.1–124.8 (http://www.njp.org/)
"....The calculated frequency for the free H2 molecule is 4190 cm−1......"
helping to stem the tide of ignorance
#### alancalverd
• Global Moderator
• Neilep Level Member
• 4815
• life is too short to drink instant coffee
##### Re: 4 a deeper discussion: Is distance an absolute invariant?
« Reply #47 on: 13/02/2016 12:57:12 »
No, I am 'not' saying that the light is 'bent'... (my theory states that light cannot be bent by gravity, it's massless) ... I'm saying that the light is gravitationally 'shifted'. And yes, that a 'change' in the rate of time is occurring. That physics is calculating this 'change' as being slower. This is causing the appearance of a length contraction. If you calculate under the remit of this 'change' in time as being a quicker time, then you can see the length has not contracted. The contracted 'time' has caused the constant speed of light to cover the distance a bit quicker, is all.
But under gravitational lensing, light takes longer to get from A to B because the path length is increased, as observed.
helping to stem the tide of ignorance
#### timey
• Neilep Level Member
• 1356
##### Re: 4 a deeper discussion: Is distance an absolute invariant?
« Reply #48 on: 13/02/2016 13:53:08 »
Yes - and the quote you quoted me on is concerning the 4km distance of the gravity wave experiment, in that I am saying a time contraction is occurring rather than a length contraction.
In space the light is taking longer to get from a to b because of time dilation rather than distance dilation.
(This is also in answer to a post you made earlier Space Flow)
In the case of gravitational lensing, a mass is passing in between our mass and the mass of the light source. This light from the light source is not bent towards the in between body/bodies of mass.
Now this is where it gets complex. I am saying that we will only be able to view 77.68% of the light of the light source. (d/square root 2, (edit: I think that might be d/0.41 instead...hmmmm) and transposed back into magnitude) But as the light passes the in between body of mass, the gravitational field changes. Now we are looking at a new calculation of d in (d/square root 2, (edit: same as last edit) and transposed back into magnitude). We are just looking at 'more' light.
This is based on the notion that between different 'rates' in the rate of time, it would be impossible to view all of the time scale of a slower time. And also that it is impossible to view all of the time scale of a faster time. You will only see a percentage as per by how much faster, or as per how much slower the other rate of time is running. This being why, in the world of quantum, something can be seen, from our rate of time, to be in 2 places at the same time.
« Last Edit: 13/02/2016 14:07:31 by timey »
#### timey
• Neilep Level Member
• 1356
##### Re: 4 a deeper discussion: Is distance an absolute invariant?
« Reply #49 on: 13/02/2016 18:41:48 »
« Last Edit: 13/02/2016 19:00:35 by timey »
|
2017-01-22 20:38:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7209463715553284, "perplexity": 1039.2293556913803}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281574.78/warc/CC-MAIN-20170116095121-00480-ip-10-171-10-70.ec2.internal.warc.gz"}
|
https://math.stackexchange.com/questions/374422/distribution-of-a-brownian-motion-with-respect-to-mathbbpx
|
# Distribution of a Brownian motion with respect to $\mathbb{P}^x$
Let $(\Omega,\mathcal{A},\mathbb{P})$ a probability space and $(B_t)_{t \geq 0}$ a Brownian motion (started in $x=0$). Then one can define a probability measure $\mathbb{P}^x$, $x \in \mathbb{R}$, on $(\Omega,\sigma(B_t; t \geq 0))$ by
$$\mathbb{P}^x(B_{t_1} \in A_1, \ldots, B_{t_n} \in A_n) := \mathbb{P}(x+B_{t_1} \in A_1, \ldots, x+B_{t_n} \in A_n)$$ where $A_j \in \mathcal{B}(\mathbb{R})$, $t_j \geq 0$.
This implies in particular $\mathbb{P}^x(B_0=x)=\mathbb{P}(B_0=0)=1$, i.e. $B_0 \sim \delta_x$ with respect to $\mathbb{P}^x$.
But what's wrong with the following argument? Assume that $B_0(\omega)=0$ for all $\omega \in \Omega$. Then $B_0^{-1}(\{x\})=\emptyset$ for all $x \neq 0$, hence in particular $\mu(B_0=x)=0$ for any measure $\mu$. This clearly contradicts $\mathbb{P}^x(B_0=x)=1$.
Thanks!
This remark is a good reason to avoid assuming that $B_0(\omega)=0$ for all $\omega$ in $\Omega$. Instead, one usually defines different measures $\mathbb P^x$ on a common probability space $\Omega$, and each $\mathbb P^x$ sees a different part of $\Omega$.
For example, one can choose for $\Omega$ the space of continuous functions $[0,+\infty)\to\mathbb R$ and define $(B_t)_{t\geqslant0}$ by $B_t:\omega\mapsto\omega(t)$, for every $t\geqslant0$. Then each $\mathbb P^x$ is a probability measure on the entire $\Omega$ but $\mathbb P^x(\Omega_x)=1$ for every $x$, where $\Omega_x=[B_0=x]$, that is, $\Omega_x\subset\Omega$ is the space of all the continuous functions $\omega:[0,+\infty)\to\mathbb R$ such that $\omega(0)=x$ (note that each $\Omega_x$ is measurable). In particular, $\mathbb P^x$ and $\mathbb P^y$ are mutually singular for every $x\ne y$ since $\Omega_x\cap\Omega_y=\varnothing$.
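One way to see this construction is to simulate it: under this realisation, a path with law $\mathbb{P}^x$ is simply a standard path shifted by $x$. A minimal sketch (NumPy; the step count and step size are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def brownian_path(n_steps, dt, start=0.0):
    # One sample path on a regular grid: B_0 = start, independent N(0, dt) increments.
    increments = rng.normal(0.0, np.sqrt(dt), size=n_steps)
    return start + np.concatenate(([0.0], np.cumsum(increments)))

x = 2.0
path_P  = brownian_path(1000, 0.001)   # law P: starts at 0
path_Px = x + path_P                   # law P^x: the same path shifted by x
assert path_Px[0] == x                 # B_0 = x holds P^x-almost surely
```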
• I know that it works fine if I consider the canonical Wiener process, i.e. for your given example. My problem is how to prove (rigorously) that any Brownian motion is a Markov process, i.e. that there exists a measure $\mathbb{P}^x$ such that $$\mathbb{P}(B_t \in B \mid \mathcal{F}_s)= \mathbb{P}^{B_s}(B_{t-s} \in B) \, \, (s \leq t)$$ Clearly $\mathbb{P}(B_t \in B \mid \mathcal{F}_s)=p_{t-s}(B_s,B)$ for some transition function $p_r(x,\cdot)$, so by Kolmogorov's theorem I have $\mathbb{P}(B_t \in B \mid \mathcal{F}_s) = \mathbb{Q}^{W_s}(W_{t-s} \in B)$ [...] – saz Apr 28 '13 at 8:16
• where $\mathbb{Q}^x$ is a probability measure on the space of continuous functions and $W_t(w) := w(t)$ the canonical process. But I still don't grasp how to combine this with the definiton of a Markov process (where -as I already wrote- one usually requires the existence of a measure $\mathbb{P}^x$ on the probability space of the given process.) – saz Apr 28 '13 at 8:22
• As I wrote, the way out is to consider that the probability space of the given process is the full space $\Omega$ in my answer, not some $\Omega_x$ depending on $B_0=x$. – Did Apr 28 '13 at 8:31
• Sure, try $\Omega=\Omega_0\times\mathbb R$ with the obvious product sigma-algebra and $X_t(\omega)=x+B_t(\omega_0)$ for every $\omega=(\omega_0,x)$ in $\Omega$. Then, if $(B_t)$ is a Brownian motion starting from $0$ with respect to some probability $P$ on $\Omega_0$ then $(X_t)$ is a Brownian motion starting from $x$ with respect to the distribution $P\otimes\delta_x$ on $\Omega$. – Did Apr 28 '13 at 12:35
|
2020-02-22 17:57:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9861205816268921, "perplexity": 54.795486514276355}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145708.59/warc/CC-MAIN-20200222150029-20200222180029-00025.warc.gz"}
|
http://mathhelpforum.com/discrete-math/20929-tautology.html
|
# Math Help - tautology
1. ## tautology
Hi! I have a tautology problem for 3 points in my class, but I am not even at the chapter involved yet. I am a day away. This is the problem:
Is it tautology? Why or why not?
( p ^ q) --> (p V q)
In particular, I as yet do not understand what the arrow represents or how the relation is made between the two statements.
I have to have this figured out by tomorrow. Any help appreciated.
2. This isn't something to start trying to understand the night before.
If you have p \/ q, the only way it's false is when both are false, otherwise, true.
If you have p /\ q, the only way it's true is when both are true, otherwise, false.
p ---> q, the only way it's false is when the antecedent, p, is true and the consequent, q, is false.
Start by building your truth table.
Code:
p q p /\ q p \/ q (p /\ q)--->(p \/ q)
T T
T F
F T
F F
There's a start. Now finish 'er up.
3. Hey thanks. Stuff happens. I really appreciate the help. I'll be back.
I understand the truth tables and what you said, but what is the arrow supposed to mean?
p q p /\ q p \/ q (p /\ q)--->(p \/ q)
T T T F
T F F T
F T F T
F F T F
I mean I understand the rules for "and" and "or," but I do not see a relation between the statements separated by the arrow. I don't know what the arrow means.
By the way, if the elements p and q are both false, then the statement p /\ q is false, right?
4. p ---> q means "if p, then q". The only way it is false is if p is true and q is false. Go from there. Yes, you are correct about /\. It is true only when both are true. So, if both are false, it's false.
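For completeness, the whole table can also be checked mechanically; a minimal sketch in Python:

```python
from itertools import product

def implies(a, b):
    # a ---> b is false only when a is true and b is false
    return (not a) or b

for p, q in product([True, False], repeat=2):
    value = implies(p and q, p or q)
    print(p, q, value)
    assert value  # true on every row, hence a tautology
```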
|
2015-07-08 05:13:59
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6703654527664185, "perplexity": 893.3283243092915}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435376073161.33/warc/CC-MAIN-20150627033433-00284-ip-10-179-60-89.ec2.internal.warc.gz"}
|
https://orkfia.fandom.com/wiki/Raid
|
Storm into your enemy’s lands, capturing some easy acres, and killing up to 20% of the target citizens. Military losses are slim for the attacker and high for the defender. Attacking smaller tribes results in fewer returns.
• Kills a minimum of 2x the target's acres as citizens. Race modifiers apply.
• You capture 35% extra land on your way to the enemy's tribe. These acres can be built on right away.
## Gains Formulae
Land Gained = $\frac{0.039\cdot AttLand}{\left(1+0.5\cdot e^{-8\cdot\left(\tfrac{DefLand}{AttLand}-0.8\right)}\right)^2}$
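A direct transcription of the formula, for experimenting with attack sizes (the function and argument names are assumptions, not from the wiki):

```python
import math

def land_gained(att_land, def_land):
    # Raid gains: 0.039*AttLand / (1 + 0.5*exp(-8*(DefLand/AttLand - 0.8)))^2
    ratio = def_land / att_land
    return 0.039 * att_land / (1 + 0.5 * math.exp(-8 * (ratio - 0.8))) ** 2

print(land_gained(1000, 1000))  # equal-sized target: ~32 acres
print(land_gained(1000, 500))   # smaller target: under 1 acre, i.e. fewer returns
```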
## Losses
• Attacking tribe loses 0.6% of the amount of offense required to break the defender's defenses.
• Defending tribe loses 1% of defensive military units, as well as up to 20% citizens.
|
2020-08-07 08:58:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4205023944377899, "perplexity": 10784.120880127484}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439737172.50/warc/CC-MAIN-20200807083754-20200807113754-00124.warc.gz"}
|
http://mathhelpforum.com/trigonometry/219066-find-x-trig-equation.html
|
# Thread: Find x in trig equation
1. ## Find x in trig equation
Hi there,
"Solve 6sin2x + sinx - 2 for all x in 0 < x < pi/2"
I know through graphing that the answer is pi/6. Is there another way of proving this?
thanks
2. ## Re: Find x in trig equation
You may also treat it as a quadratic equation and then the values can be found. For ax^2 + bx + c = 0, we have x = [ -b +/- sqrt( b^2 - 4ac ) ] / 2a.
In this case you will get sin x = [ -1 +/- sqrt ( 1+ 48 )]/ 12
now solve it further
3. ## Re: Find x in trig equation
So I get sin x = 1/2 and -2/3.
How do I then solve x for x in 0 < x < pi/2? Is it simply 1/2?
4. ## Re: Find x in trig equation
sin(pi/6) = 1/2, and sine is positive in the first and second quadrants. Thus we have sin(pi/6) = sin(5pi/6) = 1/2.
I hope it is OK now. Similarly for the other value; remember sine is negative in the third and fourth quadrants.
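A quick numerical check of the algebra above (using Python's standard math module):

```python
import math

# Roots of 6 s^2 + s - 2 = 0, where s = sin x.
a, b, c = 6, 1, -2
disc = math.sqrt(b * b - 4 * a * c)
roots = [(-b + disc) / (2 * a), (-b - disc) / (2 * a)]
print(roots)            # [0.5, -0.666...]

x = math.asin(0.5)      # the only root giving x in (0, pi/2)
print(x, math.pi / 6)   # both print 0.5235987755982989
```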
|
2017-03-24 14:34:47
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8729593753814697, "perplexity": 1264.2420026497662}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218188132.48/warc/CC-MAIN-20170322212948-00323-ip-10-233-31-227.ec2.internal.warc.gz"}
|
https://phys.libretexts.org/TextMaps/Astronomy_and_Cosmology_TextMaps/Map%3A_Astronomy_(Impey)/2%3A_Early_Astronomy/2.21%3A_The_Dark_Ages
|
# 2.21: The Dark Ages
After the 2nd century B.C. the Greek civilization went into decline and was eclipsed by the Roman Empire. Claudius Ptolemy was another scholar at the Alexandrian library. Around 140 A.D., he wrote a 130-volume encyclopedia that synthesized the teachings of Greek scholars. In this work, Ptolemy extended the existing star catalogs to 1022 stars. He described a geocentric model of the Solar System, with the Sun, Moon, and planets all moving around Earth in circular paths. Ptolemy's method of tracking planets, using these assumptions, gave fairly good predictions of the planets' positions. This model was accepted for a thousand years. Improving and correcting Ptolemy's system was a dangerous enterprise that cost some people their lives during the Renaissance.
Representation of Ptolemy.
Astronomy and science in general did not advance substantially under the Romans, who were more interested in such practical matters as agriculture, engineering, and governance than they were in studies of nature and the universe. With the fall of Rome in 410 A.D., maintaining the repository of knowledge in Alexandria became more difficult. Remember that this was centuries before printing; many books existed in only a few handwritten copies. Among the last guardians of the library was the first known woman astronomer, Hypatia. Widely admired for her learning and eloquence, she corresponded with leading scholars, wrote a commentary on Ptolemy's work, and invented navigation devices. But during riots that plagued Alexandria's decline, she was murdered by a mob. Over the next century, the library buildings were burned several times, and the best collection of Greek books was lost or scattered.
'Hypatia' at the Haymarket theatre: Philammon declaring his love for Hypatia. Print from the first page of The Graphic, 21 January 1893.
It was a sad point in history. Much general knowledge was lost. The scientific way of looking at the world, gained among the ancient Mediterranean cultures, languished and was forgotten. Only through circuitous and painful routes was some of this knowledge reintroduced centuries later into Europe. During the long Dark Ages that followed, other cultures were the keepers of the flame of astronomy. This period is a sobering reminder that progress and knowledge do not always march forward; they can be derailed by social chaos and unenlightened or brutal governments.
The Great Library of Alexandria, O. Von Corven, 1st century.
|
2017-11-21 19:19:42
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3877555727958679, "perplexity": 5378.03794274106}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806422.29/warc/CC-MAIN-20171121185236-20171121205236-00641.warc.gz"}
|
https://socratic.org/questions/how-do-you-find-the-center-and-radius-of-the-circle-x-2-y-2-6x-8y-0
|
# How do you find the center and radius of the circle x^2 + y^2 - 6x + 8y = 0?
I want someone to double check my answer
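No worked answer survived on the page, so for reference here is the standard completion of the square (supplied here, not taken from the original answer):
$$x^2+y^2-6x+8y=0 \;\Longrightarrow\; (x-3)^2-9+(y+4)^2-16=0 \;\Longrightarrow\; (x-3)^2+(y+4)^2=25$$
so the center is $(3,-4)$ and the radius is $\sqrt{25}=5$.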
|
2018-05-24 21:23:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5186847448348999, "perplexity": 6868.72162256814}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794866870.92/warc/CC-MAIN-20180524205512-20180524225512-00200.warc.gz"}
|
http://www.eng-tips.com/viewthread.cfm?qid=23280
|
# comparison between chilled water and dx systems
## comparison between chilled water and dx systems
(OP)
I would dearly appreciate it if you would help with this topic.
### RE: comparison between chilled water and dx systems
Very basically,
DX (direct expansion) systems: The treated airstream passes through the outside (fin side) of the evaporator coil such that it is directly cooled by the expansion of refrigerant passing through the tubes of the coil.
Chilled water systems: Water is pumped through the evaporator coil - the expansion of refrigerant passing through the coil cools the water. The (same) chilled water is pumped through a separate water coil that cools the air stream.
### RE: comparison between chilled water and dx systems
(OP)
Sorry, I didn't make myself clear. Of course I know the difference between both systems, but I need your help to make a complete comparison (i.e. from the economic point of view, or for which applications one is superior to the other).
I think this is the time that this question should be made clear, now that the VRV systems are being seriously considered.
What is your cost of electrical energy as a blended kWh rate? At $.10 per kWh you are better off with a CHW system, if the following also hold true:
1) There is money in the budget for the higher initial cost of the CHW system.
2) There are equipment operators who are capable and have the time available to handle the more complex CHW system.
PacificSteve
### RE: comparison between chilled water and dx systems
(OP)
Thank you PacificSteve, now we're starting to get to the point. So it's now time to consider the comparison application-wise, and I mean chilled water systems vs VRV systems (or VRF systems). If anyone could manage a pros-and-cons comparison, it would be of great help.
### RE: comparison between chilled water and dx systems
A few more comparisons between chilled water and VRV or DX systems:
1. Closer control is available in a chilled water system due to modulating the chilled water flow through the cooling coil. I do not think the modulation of refrigerant in a VRV system is close enough.
2. A VRV system has a limitation in refrigerant pipe length as well as height. Chilled water is a closed loop and does not have such limitations so long as the pump can handle the flow and dynamic head.
3. Chilled water can be used for larger applications on the air side like AHUs, FCUs etc. VRV, as far as I know, is limited to FCUs only.
4. I feel the indoor air quality can be better maintained with a chilled water AHU with multi-level filtration and fresh air, rather than small recirculation-type FCUs with a limitation on the filters used.
5. Chilled water can be used for large spaces with a ducted-system AHU. A VRV system on the air side is limited to small areas with short ducting.
6. In CHW systems with a ducted system we can locate the AHU away from occupied spaces and thus minimise inconvenience during servicing. In VRV, FCUs are located within occupied spaces and are a possible inconvenience to occupants during servicing.
7. A CHW system can be used with the popular VAV system on the air side, whereas VRV can only offer a CAV system, of course with 3-speed FCUs.
I hope the above answers your query.
Regards
VVVM
### RE: comparison between chilled water and dx systems
(OP)
Thank you VVVM. So is there anybody out there who thinks that VRV systems are superior to chilled water systems?
### RE: comparison between chilled water and dx systems
VRV is typically used in the following: the HVAC system in your car; the HVAC systems in most houses; computer room units, which are typically DX (keep the H2O out of the room); package DX rooftops on shopping centers / malls. Typically a job has to be over 300 tons for a chilled water system to make the economics work.
### RE: comparison between chilled water and dx systems
Just wanted to emphasize: the economics of a system are highly dependent on the rate of energy costs, and the presence of people who can operate / maintain the systems. If you are paying $.03 per kWh, you would indeed need a large tonnage requirement before CHW made sense economically. At $.15 it makes sense in much smaller tonnages.
As someone else mentioned---you also have better temperature and humidity control with CHW. Not to mention that you are not running refrigerant lines through your building in which case a refrigerant leak can be dangerous and in extreme case, could even kill someone. DX package rooftop unit are obviously exempt from this safety concern.
PacificSteve
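To put rough numbers on this, a minimal sketch using the kW/ton figures quoted later in this thread; the tonnage, run hours and electricity rate below are illustrative assumptions:

```python
tons = 100
hours_per_year = 3000
rate = 0.10  # $/kWh, illustrative blended rate

for name, kw_per_ton in [("water-cooled chiller", 1.10),
                         ("air-cooled rooftop DX", 1.22)]:
    annual_cost = tons * kw_per_ton * hours_per_year * rate
    print(f"{name}: ${annual_cost:,.0f} per year")
```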
### RE: comparison between chilled water and dx systems
I totally agree with PacificSteve regarding potential leakage problems. Cooling applications below 70 deg. F will start causing trouble in terms of ice formation on the DX coil.
However, the economy of chilled water systems at and above 300 TR is totally debatable, for we are using such systems with even 20 TR.
Regards,
Truth: Even the hardest of problems will have at least one simple solution. Mine may not be one.
### RE: comparison between chilled water and dx systems
In the UK we've seen a big move to variable-flow refrigeration systems in the last 10 years. VRF has now been applied typically to buildings as tall as 5/6 storeys. I had drawings from a designer today which had VRF specified on an 8-storey building!
Yes, control is not as good, but the installation and capital costs are so low in comparison that design-and-build developers go this way every time. Maintenance and service costs are higher with VRF, but hey, this is an end-user issue and costs the developer nothing.
The market for DX systems in the UK is now twice as large as the chilled water system market in commercial buildings (we don't have a domestic a/c market), and this has happened even though we have guidelines in place to avoid the use of refrigerant in the space.
For me, give me chilled water every time for flexibility, maintenance, service and controllability.
### RE: comparison between chilled water and dx systems
Dear Arabei
A VRV system is not intended for high-ambient climates such as in the Gulf areas, nor for quiet places such as hotels and hospitals; it needs a separate leak detection system and fresh air ducting in all cases. Furthermore, a supply/return ducting network is still needed if you choose to install the concealed-type indoor units.
You need to stick to their control system, and there are many limitations on the selection regarding the minimum/maximum indoor units you can connect to the outdoor unit. Further, you cannot make any alteration to the system in terms of adding any accessories, gauges, valves, regulators or extra refrigerant circuits.
A last note: in low-humidity areas, due to the fact that the system is designed to work at a refrigerant temperature of 5.5 C, you need to add humidifiers to your space.
I recently heard that it was replaced in WTC-Makka because of some of the above considerations.
All the best
### RE: comparison between chilled water and dx systems
Something else to consider:
In direct expansion (DX) systems you only have one heat exchanger on the low side, and it is the finned evaporator. It works with some temperature difference, let's say 7ºC. So the suction temperature is 7ºC lower than the air being cooled (all depending on design).
In chilled water systems you need two heat exchangers (one cooling the water and the other cooling the air), so compressor suction might be lower. For example, 7ºC in the water cooler plus 5ºC in the air cooler would mean a suction pressure (temperature) 12ºC lower than the air being cooled (all depending on design).
This means that the compressor suction will be lower in the water chiller system. Normally, efficiency (COP) goes down as suction pressure goes down.
All this means lower capacity in the water system than in the DX system (for the same compressor), or more energy consumption.
### RE: comparison between chilled water and dx systems
Here are some more things to consider:
Chilled Water
Advantages:
1. Better control: central systems have chilled water and hot water mains circulating water to the cooling and heating coils. Any number of branches can be connected to these mains to supply heating and cooling elements with water controlled by automatic valves.
2. Centralized maintenance: equipment is usually within a central mechanical room and is likely to be better maintained than equipment located on the roof.
3. Diversity: the central chilled water system can be sized for the instantaneous peak heat load. Rooftop units must be sized for the peak load of the areas they serve.
4. Redundancy: central systems provide redundancy by installing multiple chillers and pumps.
5. Flexibility: adding new chilled water circuits to the existing system is a relatively simple operation. An additional chiller and pump can be added to the existing system to produce extra cooling load.
6. Possible lower installed costs for larger systems, like over 300 tons.
7. Possible lower operating costs if you use a cooling tower or an evap condenser.
Disadvantages:
1. Higher installed costs for systems under 150 tons.
Rooftop DX Units
Advantages:
1. Building floor space saved since units are located on the roof.
2. Usually lower initial installed costs.
3. Simpler installation in most cases.
4. More singular responsibility because units are completely assembled and tested at the factory.
Disadvantages:
1. Higher operating costs. For example:
A. 15-100 ton water-cooled reciprocating chiller: total 1.10 kW/ton
B. 20-60 ton air-cooled rooftop unit: total 1.22 kW/ton
C. 80-100 ton air-cooled rooftop unit: total 1.18 kW/ton
2. Higher maintenance costs.
3. Shorter life expectancy.
4. Structural work may be required to support the units.
5. Weatherproof ductwork may be required.
6. Possible damage to the roof.
### RE: comparison between chilled water and dx systems
With VRF, oil may not return to the compressor under low-load conditions. Wait about 5 years till the reliability of VRF or any new technology is proven from experience. You should provide a reliable, dependable, maintainable design for your client.
|
2017-02-28 12:08:42
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24210180342197418, "perplexity": 4392.844372307798}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174159.38/warc/CC-MAIN-20170219104614-00637-ip-10-171-10-108.ec2.internal.warc.gz"}
|
https://myassignments-help.com/2022/11/22/lie-group-and-lie-algebra-dai-xie-math3550/
|
# Lie Groups and Lie Algebras | Math3550
## DIRECT SUMS OF VECTOR SPACES
The direct sum, like the tensor product, is a fundamental vector space operation which finds many applications in the theory of Lie algebras and their representations. The direct sum operation in vector space theory is useful both as an analytical tool and as a constructive procedure. Corresponding to these two modes of usage, there are actually two slightly different definitions of the direct sum, known as the internal and the external direct sum. In practice there is little danger in being a bit careless on this point since these two variants are to a large extent equivalent, and the distinction between them can usually be understood from context. If $V_1$ and $V_2$ are vector spaces over a field $\mathbb{F}$, their external direct sum consists of all the ordered pairs $\left(v_1, v_2\right)$, where $v_1$ is in $V_1$ and $v_2$ is in $V_2$. The external direct sum, denoted by $V_1+V_2$, may be regarded as a vector space if vector addition and multiplication of vectors by scalars are defined component wise. A familiar example of the use of the external direct sum is in the construction of the $n$-dimensional vector space
$$\mathbb{F}^n=\mathbb{F}+\cdots+\mathbb{F} \quad (n \text{ copies})$$
over a field $\mathbb{F}$. Another example of the use of the external direct sum is in the construction of the tensor algebra, to be discussed later.
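Spelled out, the component-wise operations on the external direct sum $V_1+V_2$ mentioned above are
$$\left(v_1, v_2\right)+\left(w_1, w_2\right)=\left(v_1+w_1,\; v_2+w_2\right), \qquad \lambda\left(v_1, v_2\right)=\left(\lambda v_1,\; \lambda v_2\right)$$
for $v_i, w_i \in V_i$ and $\lambda \in \mathbb{F}$.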
The concept of internal direct sum is used when we are talking about the lattice of all subspaces of a given vector space. The set of all subspaces of a vector space is closed under the operation of intersection, but not under the operation of union. The sum $S_1+S_2$ of subspaces $S_1$ and $S_2$ of a vector space is the set of all elements of the form $x+y$, where $x$ is in $S_1$, and $y$ is in $S_2$. Equivalently, the sum of two subspaces may be described as the subspace spanned by their union. The set of all subspaces of a vector space is said to be a lattice under the two operations $\cap$ and $+$. This means that these operations satisfy certain axioms somewhat reminiscent of Boolean algebra, but not quite as strong [112], [134]. In particular, the distributive laws between $\cap$ and + do not hold, and there is no analogue of the de Morgan laws of complementation. More formally, a lattice is a partially ordered set in which every pair of elements has a least upper bound and a greatest lower bound. In the case of the lattice of subspaces of a vector space, the partial ordering is just the inclusion relation, while $S_1 \cap S_2$ is the greatest lower bound of $S_1$ and $S_2$ and $S_1+S_2$ is the least upper bound of $S_1$ and $S_2$.
## THE LATTICE OF IDEALS OF A LIE ALGEBRA
For the structure and classification of Lie algebras, the concepts of subalgebra and ideal play the same fundamental roles that subgroups and normal subgroups play in Lie group theory. If $A$ and $B$ are subspaces of a Lie algebra, we denote by $[A, B]$ the subspace spanned by all vectors $[a, b]$, where $a \in A$ and $b \in B$. A subalgebra $S$ of a Lie algebra $L$ is a subspace which is closed under the Lie multiplication, that is, which satisfies $[S, S] \subset S$. An ideal $I$ of a Lie algebra $L$ is a subalgebra such that the Lie product of an element of $L$ with any element of $I$ is in the subalgebra $I$, that is, for all $x \in L$ and all $y \in I$, we have $[x, y] \in I$. Thus, an ideal $I$ of a Lie algebra $L$ may be defined as a subspace which satisfies $[L, I] \subset I$. As may be expected, subalgebras and ideals figure in the fundamental homomorphism theorems for Lie algebras. In particular, the kernel of any Lie algebra homomorphism is an ideal, while the image is a subalgebra.
The sum and intersection of ideals of a Lie algebra are again ideals, and the ideals of a Lie algebra form a lattice under these two operations. In addition, it follows from the Jacobi identity that the Lie product $\left[I_1, I_2\right]$ of two ideals is again an ideal. The situation regarding subalgebras is a little bit different since the sum of two subalgebras need not be a subalgebra, although the intersection of any set of subalgebras is still a subalgebra. Nevertheless, the subalgebras of a Lie algebra still form a lattice. For both the lattice of ideals and the lattice of subalgebras, the partial ordering is inclusion and the greatest lower bound is intersection. In the lattice of ideals, the least upper bound for a pair of ideals is the sum of those ideals. In the lattice of subalgebras, the least upper bound for two subalgebras is not their sum, but rather the intersection of all subalgebras containing their union. Just as in the case of finite groups, the structure of a Lie algebra can be studied by investigating the properties of its lattice of subalgebras [22], [37], [101], [105], [141], [142].
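As a standard illustration of an ideal: in the Lie algebra $\mathfrak{gl}(n, \mathbb{F})$ of all $n \times n$ matrices under the commutator bracket, the trace-zero matrices $\mathfrak{sl}(n, \mathbb{F})$ form an ideal, since
$$\operatorname{tr}[x, y]=\operatorname{tr}(x y)-\operatorname{tr}(y x)=0 \quad \text{for all } x, y \in \mathfrak{gl}(n, \mathbb{F}),$$
so $[\mathfrak{gl}(n, \mathbb{F}), \mathfrak{sl}(n, \mathbb{F})] \subset \mathfrak{sl}(n, \mathbb{F})$.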
|
2022-11-28 19:03:23
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.911737322807312, "perplexity": 87.03286615918917}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710534.53/warc/CC-MAIN-20221128171516-20221128201516-00372.warc.gz"}
|
https://dsp.meta.stackexchange.com/questions/1659/what-to-do-if-multiple-accounts-are-ostensibly-the-same-person
|
# What to do if multiple accounts are ostensibly the same person?
Case in point:
https://dsp.stackexchange.com/posts/62182/revisions
shows that Man and user46417 are one and the same user.
Personally, I'd recommend a deletion of the "unnamed" user with pointing out that multi-usering is discouraged, but I don't know policy on that.
EDIT: Ah, Meta.SE says that accounts can be merged, so this now becomes a merge request, unless Man engaged in "sockpuppeteering" (i.e. using user46417 to upvote himself).
EDIT2: The edit history of Difference between repeated poles and non repeated poles? might suggest that also abjt and Man are the same person, although it's not that conclusive; abjt might simply have been sitting next to Man and overly eagerly tried to specify Man's question and changed directions of the question in the process. I'll simply ask.
EDIT3: OK, this is starting to feel like a conspiracy theory, but engr's Image processing vs signal processing? is, especially in the final sentence, so much like image processing vs Computer vision? that it defies the feeling of a likely coincidence.
It's not clear to me that Man and user46417 are the same. The system logs IP addresses and registered email addresses and both are very different for each user. I can also see that the two are not acting as "sock puppet" accounts. Again, the system flags cross-support and neither shows any activity like that.
|
2020-07-08 05:06:28
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2287834882736206, "perplexity": 1889.1024784181775}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655896374.33/warc/CC-MAIN-20200708031342-20200708061342-00436.warc.gz"}
|