url
stringlengths 14
2.42k
| text
stringlengths 100
1.02M
| date
stringlengths 19
19
| metadata
stringlengths 1.06k
1.1k
|
|---|---|---|---|
https://cs.stackexchange.com/questions/28285/does-this-mean-p-np
|
# Does this mean $P = NP$
I am not a formally trained guy on Complexity theory, but due to interest I am learning it. Based on different feedbacks, I have started my journey with Micheal Sipser's "Theory of Computation" (2013 edition). My question is based upon some statements from the book. My sincere apologies if the question is meaningless.
Here it goes:
1. On page 298, it is written as "We are unable to prove the existence of a single language in NP that is not in P". That implies all the NP languages are also in P.
2. Theorem 7.27 says $SAT\in P$ iff $P=NP$, and it's an implication in both directions.
3. From cook's theorem, $SAT$ is NP-complete. Hence, from the NP-complete definition $SAT \in NP$.
4. Now, from the point 3 and 1, as $SAT \in NP$ it should also be $SAT \in P$, and therefore from point 2 it should be $P=NP$.
I got stuck here, I know there is some flaw in this logical deduction (because it is not that easy to prove), but I am unable to find it. Please help me in understanding this concept, so that I can proceed further. Thanks in advance.
The important point is the iff, it means if and only if. Point 2 should be read as SAT is an element of $P$ if and only if $P=NP$, and we do not know if $SAT$ is in $P$.
Also, point one does not say all languages in $NP$ are in $P$, just that we have not proven that one is not. So Just because $SAT$ is in $NP$ does not mean it is necessarily in $P$. A proof of a language as he describes would prove $P \neq NP$
|
2019-06-20 07:12:29
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8105264902114868, "perplexity": 171.9257850039653}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999163.73/warc/CC-MAIN-20190620065141-20190620091141-00485.warc.gz"}
|
https://stacks.math.columbia.edu/tag/0BVJ
|
Remark 49.4.7. Let $f : Y \to X$ be a flat locally quasi-finite morphism of locally Noetherian schemes. Let $\omega _{Y/X}$ be as in Remark 49.2.11. It is clear from the uniqueness, existence, and compatibility with localization of trace elements (Lemmas 49.4.2, 49.4.6, and 49.4.4) that there exists a global section
$\tau _{Y/X} \in \Gamma (Y, \omega _{Y/X})$
such that for every pair of affine opens $\mathop{\mathrm{Spec}}(B) = V \subset Y$, $\mathop{\mathrm{Spec}}(A) = U \subset X$ with $f(V) \subset U$ that element $\tau _{Y/X}$ maps to $\tau _{B/A}$ under the canonical isomorphism $H^0(V, \omega _{Y/X}) = \omega _{B/A}$.
In your comment you can use Markdown and LaTeX style mathematics (enclose it like $\pi$). A preview option is available if you wish to see how it works out (just click on the eye in the toolbar).
|
2023-01-28 16:13:28
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.9660080075263977, "perplexity": 327.5212042319273}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499646.23/warc/CC-MAIN-20230128153513-20230128183513-00189.warc.gz"}
|
https://www.numerade.com/questions/when-a-star-has-exhausted-its-hydrogen-fuel-it-may-fuse-other-nuclear-fuels-at-temperatures-above-10/
|
Nuclear Physics
Particle Physics
### Discussion
You must be signed in to discuss.
##### Christina K.
Rutgers, The State University of New Jersey
LB
### Video Transcript
in this question. Yeah. Looking its hold star, actually refuse. Undergo in your confusion. And what a way that it's possible is the fusion off so far particles and off course alphabetical after diffusion. And you can't gonna go for the fusion. So we're gonna go get Why did he processes? So I give up. The first reaction is to alpha particles fusing together to form an unknown You could see along with a summary. So in order to identify what is the unknown? You kousky, we can use t conservation off our nuclear number. Spls, The proton number I saw a nuclear number is two number at the top and put on almost one test below. And the left inside the total brought a number. Must be the same as the right hand side. So they stack the conservation off the proton number Elevens I received. That's a total of four. So on the right, we must have for us. Well, since camera, it doesn't have any protons a myself for once, say, for the nuclear number, this must be in eat since left side and sigh cure for blast forages eat Looking up the periodic table for an element with four protons. We see that this is actually really right. So this is actually verily, um, it No, you're given that this really heat actually undergoes further absorption off a how far particle should be right and gives us another new crispy with a company. Now, what is thesis on? Don't be. We do the same process there is. We look at the total proto number in total nuclear numbers and you must be constant. So we have six proton number and shelf. So looking at the periodic table for an element with six protons, that must be coupled. So this is actually covered. Come on, shelf now to find what if the throat energy least we are going to combine this too Reactions into one every. Do that by cancelling out. You know, common common elements that common new collapsed actually appear on both sites off the equation. That is, if you add this up together, we get tree off the l four porticos. Plus it's four very young. Your first for very Liam with Gorbachev. If two primaries, we see that very them actually appears on both sex, so we can actually know this. They will cancel each other out, not to find. What s the energy does released from here? That is cute value you need to find What is the change in the mess? Multiplied it by C squared. So change your message is defined as the initial rectum. Smith's minus away the product's mess, right? And so the messes for the intro victims will be three times the mass off Alpha particle in every subject by D mess off carbon show. Bye bye. See Square, No case the mess Softy now for particle. Just he he loved you can't. Yes. 4.26 Your tree, you mess off Gorbachev is chop off you c Square. We're gonna use the form 91.4 night for MTV for you. Since we are doing we four point mess units putting it to co creator, we should get a very off 7.27 and maybe
National University of Singapore
#### Topics
Nuclear Physics
Particle Physics
##### Christina K.
Rutgers, The State University of New Jersey
LB
|
2021-04-15 11:22:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4997478425502777, "perplexity": 1753.228479947246}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038084765.46/warc/CC-MAIN-20210415095505-20210415125505-00332.warc.gz"}
|
https://aplusphysics.com/community/index.php?/blogs/entry/40-derivation-unknown/
|
• entries
4
14
• views
1,674
Derivation Unknown
533 views
[ATTACH=CONFIG]61[/ATTACH]
So as I was chugging along on the Rotational Motion WebAssign I was startled to notice a seemingly coincidental relationship between my givens and my answer. But after, procrastinating longer than is healthy, trying it with other numbers, the relationship was consistent.
This pertains to question 2 on the WebAssign. I found that, for a record on a turntable with an initial rpm that slows with a constant angular acceleration until rest in time t in minutes, the number of revolutions the record makes before stopping x = (rpm)(t)/2.
So if, when the turntable is shut off, the record is rotating at 100 rpm and comes to a stop in 24 seconds or 0.4 minutes, the record will make (100)(0.4)/2 = 20 revolutions before coming to rest. You can take a step further to find the angular displacement θ by multiplying by 2π . From this we get θ = (rpm)(t)(π)
I am wondering whether there is a clear derivation and reason for this. After linking it to θ, I think I know why it works and makes sense sorta. It's finding an average. Really it should be Δrpm. It's still a little fuzzy to me.
Can you clear it up?
I'd like to speculate on this... cool find
$x=vt + \frac{1}{2}at^{2}$
change to rotational, If you assume starting at 0 and speeding up, the $\omega_{o}t$ term goes away.
$\theta = \frac{1}{2}\alpha t^{2}$
$(\alpha = \frac{\triangle \omega}{t}=\frac{\omega_{o}}{t})$
sub in...
$\theta = \frac{1}{2}\frac{\omega_{o}}{t}t^{2}$
$\theta = \frac{\omega t}{2}$
Groovy! That makes a lot of sense. Thanks for tying that down!
× Pasted as rich text. Paste as plain text instead
Only 75 emoji are allowed.
× Your previous content has been restored. Clear editor
× You cannot paste images directly. Upload or insert images from URL.
×
×
×
• Create New...
|
2022-08-14 03:12:28
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5959022641181946, "perplexity": 1814.8012696736587}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571993.68/warc/CC-MAIN-20220814022847-20220814052847-00459.warc.gz"}
|
http://blog.math.toronto.edu/GraduateBlog/tag/cms/page/2/
|
## Canadian Mathematical Society Student Committee
**Our grad student Jerrod Smith is a member of the CMS Student
Committee and he is organizing the student poster session at the
2011 Winter CMS meeting to be held in Toronto.**
Hi everyone,
CMS Student Committee is inviting you to present a poster
at the CMS Student Poster Session. The poster session will take
place on December 10-11, 2011 at the site of the CMS Winter
meeting in Toronto.
This is great opportunity to present your research in a more relaxed
atmosphere without the pressure of giving a talk. The poster can be on
your current or previous research, it could simply be a survey of the
topic you are planning to start your research in or even just a fun
and interesting topic of mathematics. There will be judging and the
top three posters will be awarded cash prizes as well as two
complimentary banquet tickets each.
to http://math.ca/Events/winter11/students. The deadline for the
registration is October 31st.
All the best,
Student Committee.
---
(Notes From the Margin) is available at
http://math.ca/Students/Newsletter/
# YOUNG MATHEMATICIAN TO RECEIVE PRESTIGIOUS AWARD
Youness Lamzouri to Receive 2011 CMS Doctoral Prize
Youness Lamzouri
OTTAWA, Ontario — The Canadian Mathematical Society (CMS) is pleased to announce that Youness Lamzouri is the recipient of the 2011 Doctoral Prize. The CMS Doctoral Prize recognizes outstanding performance by a doctoral student. Lamzouri will receive his award and present a plenary lecture at the 2011 CMS Winter Meeting in Toronto.
“Students pursuing a doctorate in mathematics are crucial to the growth and development of mathematics in Canada as well as to discovery and advancement in the fields of science and technology,” said Jacques Hurtubise, President of the CMS. “Youness Lamzouri has made considerable contributions to mathematics through his doctoral research and is highly deserving of this prize.”
“Youness Lamzouri emerges from his doctoral studies as a fully fledged mathematician,” said Andrew Granville (University of Montreal), Lamzouri’s PhD thesis supervisor. “He is a strong researcher, a very good writer of mathematics, and a clear effective teacher and lecturer who is popular with students at different levels.”
Lamzouri’s research is in the area of analytic number theory. His thesis provides a first good understanding of extreme values of the Riemann zeta-function (and of all -functions) at the edge of the critical strip, an area involved in some of the most difficult and central problems in analytic number theory.
“There was already a good understanding of the distribution of in its full range, as varies, but Lamzouri was able to give some idea of the distribution of in the same range, showing that it is more dense near the real axis than had perhaps been expected,” said Granville.
Another striking aspect of Lamzouri’s thesis work is his use of analytic techniques to understand questions on diophantine approximation (and thus settle a dispute as to the basis of the Lang-Waldschmidt conjecture on the limit of linear forms in logarithms); and in using diophantine approximation techniques (the Lang-Waldschmidt conjecture) to greatly extend the range of Fourier analysis involving ‘s.
Youness Lamzouri obtained his PhD in mathematics from the University of Montreal in 2009. After graduation, he obtained an NSERC postdoctoral fellowship, and participated in the 2009-2010 special year on Analytic Number Theory at the Institute for Advanced Study in Princeton. He was the recipient of the 2004 Jean-Maranda Award for the best finishing undergraduate student in mathematics from the University of Montreal, and the 2006 Carl Herz Prize from the Institut des sciences mathématiques (ISM). Youness is currently a J. L. Doob Research Assistant Professor at the University of Illinois in Urbana-Champaign.
Laura Alyea Communications and Special Projects Officer Canadian Mathematical Society (613) 733-2662 ext. 728 commsp@cms.math.ca or Dr. David Brydges, Chair CMS Research Committee Department of Mathematics University of British Columbia (604) 822-3620 chair-resc@cms.math.ca
## Call for submissions to Notes from the Margin
Hi everyone,
The CMS Student Committee (Studc) would like to invite you to submit
research-related articles, opinion pieces and math-related stories to
the second edition of Notes from the Margin.
We distribute the Margin to over 50 universities in Canada and very
soon we hope to have a copy of it available at every university across
the country. Meanwhile, you can find the electronic version of the first
issue of the Margin at
CMS Studc.
|
2019-06-17 05:34:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3559015989303589, "perplexity": 2002.5729712151146}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998376.42/warc/CC-MAIN-20190617043021-20190617065021-00081.warc.gz"}
|
https://zenodo.org/record/1322192/export/csl
|
Journal article Open Access
# Beyond the Decisions-Making: The Psychic Determinants of Conduct and Economic Behavior
Salatino, D. R.
### Citation Style Language JSON Export
{
"publisher": "Zenodo",
"DOI": "10.5281/zenodo.1322192",
"container_title": "Inter. J. Res. Methodol. Soc. Sci.",
"language": "eng",
"title": "Beyond the Decisions-Making: The Psychic Determinants of Conduct and Economic Behavior",
"issued": {
"date-parts": [
[
2017,
3,
31
]
]
},
"abstract": "<p>The objective of this paper is to provide a useful tool to evaluate the impact of conduct and economic behavior in decision making. It is a research based on a theory of the psychic structure and operation with a marked neurobiological support. The use of a new method is introduced: the Transcurssive Logic, to investigate the subjective reality of which, the economy, forms part. Are corroborated the hypotheses suggested by Hayek in his treatise on Theoretical Psychology: <em>The Sensible Order</em> (1952), and they are given foundation to the psychic processes that give rise to both the behavior as the conduct. It constitutes a basic contribution to Economic Psychology.</p>",
"author": [
{
"family": "Salatino, D. R."
}
],
"page": "6-24",
"volume": "3",
"version": "1",
"type": "article-journal",
"issue": "1",
"id": "1322192"
}
38
19
views
downloads
All versions This version
Views 3838
Downloads 1919
Data volume 10.5 MB10.5 MB
Unique views 3333
Unique downloads 1717
|
2020-08-08 05:12:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17304961383342743, "perplexity": 10455.714712915373}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439737289.75/warc/CC-MAIN-20200808051116-20200808081116-00236.warc.gz"}
|
https://unmethours.com/question/31742/energy-efficiency-definition-including-actual-use-of-the-building/?sort=latest
|
Question-and-Answer Resource for the Building Energy Modeling Community
Get started with the Help page
Energy Efficiency definition including actual use of the building
I am part of a group of architects and engineers that is thinking about a definition for a Real Energy Neutral Building definition. As many of you will know there are many different definitions and adding one to the list might lead to more confusion, so we try to have a holistic approach and find a definition that might incorporate several other definitions as well. This is still in a concept phase, and it is more a philosophical discussion than a technical one. Nevertheless I would appreciate input for what I have written below. Perhaps someone else has already thought of it and it would be a waste to re-invent the wheel.
Of course many will say that energy neutral means that the energy bill at the end of the year is 0, or something similar. But if a building is not used, or only partly used, it does not or not fully serve its purpose whilst the energy consumption can come pretty close to that 0, especially with on-site energy generation. We started to think about how the use of the building can be encouraged more whilst still saying it is (close to) an Energy Neutral Building.
Basically the idea is this: The "Definition" is a standardized method for determining the energy efficiency indicator based on the real energy use of the building. The energy efficiency is not just defined as energy consumption per m2/ft2 but it also incorporates the amount of time the building is actually used. In my opinion an unused building is simply a large and expensive piece of art with no real purpose other than to look pretty (or not, depending on your taste). If we take the amount of hours that a building is used for normal office hours and use that as a baseline (lets say 65 hours/m2 per year) a building with an energy consumption of 1,386,000 kWh and floor area of 12.000 m2 will have an energy use intensity of 115,5 kWh/m2. This would be the traditional EUI. But if we incorporate the use of the building as a correction factor we might see a different picture. Let's say the total number of hours the building is used is 100h/m2/year. The same energy use intensity would be different: 115.5 / (100/65) = 75 kwh/m2/year. In some simulations we have run we could see the total energy consumption going up when buildings were used more intensively, but the "Definition" that we used showed that it dropped or stayed the same.
This may be quite different from the way we are used to measure the EUI, but with this "Definition" the energy is not just used for creating a healthy/comfortable indoor climate but also incorporates how effectively we actually use the building. It also uses the actual data of the building and the way it is used. It is therefor not exactly a simulation methodology, but I think the people ...
edit retag close merge delete
Your use of commas and periods for the numbers you are using for an example seem inconsistent. And could you explain the units 65 hours/m2 per year. I'm not sure how the m2 is being used.
( 2018-05-29 06:38:20 -0600 )edit
Apologies for the confusion. The 65 hours/m2 per year is defined by taking 2.600 hours per year per person, divided over an average of 40/m2 of floor area per person. With this definition decreasing the floor area (increasing the amount of people) or having longer working hours will show a more utilized building and as such a lower EUI in this new format.
( 2018-05-29 14:48:52 -0600 )edit
What is the driving force behind creating this definition? You divisors may change quite drastically depending on the aim. Are you more interested in energy cost? cost per person? carbon emissions? personal comfort? GDP/economic productivity per energy used? transmission/grid intensity? These are different goals that would suggest different metrics. In particular, annual zero energy is becoming obsolete as a goal with more renewables on the grid; it is helpful to use more energy when there is an oversupply, and curtail use at peak times. An aggregate annual value doesn't capture this.
( 2018-05-29 18:07:33 -0600 )edit
Sort by » oldest newest most voted
Thank you for the reports! I will have a look at them and see if there are any similarities. It would indeed be a challenge to have the right hours. Right now we only focus on non-residential, non-industry buildings. The idea now is that we look at the hours the building is being used with the option of going deeper and looking at the number of people as well. If you have a security access card system the access to the building can already be defined much easier. It is also our goal to set it up as a certification system where the building owner or user will have to live up to certain criteria preventing "cheating" when the building is used.
Another way to verify if people are present is if the HVAC systems are used with certain setpoints, or with a certain air quality. In this case the residential part could be made more automatised and has (I think) the best chance of coming close to reality.
In the early and late hours less people will be present yes, but this would also be seen in the energy consumption. HVAC and lighting will be on, but the cooling loads will be less because of less people and in most if not all climate zones also a lower outdoor air termperature. Of course in winter the energy consumption will be higher, but this is also the reality right?
Again, this is just a concept that is far from finished. The goal is to make building owners aware of how efficient their building is being used in relation to the real and anticipated energy consumption and also as a verification of a certain ambition that has been set when the building was designed and built, but with leaving room for a different use for the building than was initially assumed.
Here in the Netherlands our governmental energy calculation method is static and far from accurate. An assumption is made for all building types in usage and capacity etc. Building simulations are not uncommon for offices and the like, but they are also not common and definitely not standardized. I think that is part of the reason why this discussion has started, since the market is not used to calculating it in such detail as with simulations. Quite a shame really...
more
Please use the add a comment feature for comments rather than the answer section.
( 2018-05-29 15:56:24 -0600 )edit
Many reports related to the general topic of how to define zero energy buildings have been published. NREL seems to have spent a lot of time on the topic with:
A Common Definition for Zero Energy Building https://www.energy.gov/sites/prod/fil...
Zero Energy Buildings: A Critical Look at the Definition https://www.nrel.gov/docs/fy06osti/39...
Net-Zero Energy Buildings: A Classification System Based on Renewable Energy Supply Options https://www.nrel.gov/docs/fy10osti/44...
I think your idea is interesting since it accounts for the utilization of a building and rewards buildings that are utilized more but I think the difficulty you would find is how to clearly define the number of hours a building is used. That is often a soft number. An office building can say that it is open 6am to 8pm even though most employees only work 9am to 5pm. Someone will work until 8pm, does that mean the building is really being utilized all those extra hours? How about an apartment building? Does it have 24 hour operation even though for many hours of the day only a fraction of the peak number of occupants are present. I think you will find that almost no buildings have a clearly defined hours of occupancy.
Further, for rating purposes, the owner of the building will argue that the building is open and available to be used for many more hours than it might be just because the want to lower this metric. Allowing the owner to have direct control over part of the metric, other than energy use, seems like it is fraught with problems.
Overall, I think this approach has some potential but defining the number of hours is going to be difficult.
more
|
2022-01-24 03:04:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4874213635921478, "perplexity": 642.3903570829021}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304471.99/warc/CC-MAIN-20220124023407-20220124053407-00243.warc.gz"}
|
https://jp.mathworks.com/help/phased/ref/phased.isotropichydrophone-system-object.html
|
# phased.IsotropicHydrophone
Isotropic hydrophone
## Description
The `phased.IsotropicHydrophone` System object™ creates an isotropic hydrophone for sonar applications. An isotropic hydrophone has the same response in all signal directions. The response is the output voltage of the hydrophone per unit sound pressure. The response of a hydrophone is also called its sensitivity. You can specify the response using the `VoltageSensitivity` property.
To compute the response of a hydrophone for specified directions:
1. Define and set up an isotropic hydrophone System object. See Construction.
2. Call `step` to compute the response according to the properties of `phased.IsotropicHydrophone`.
Note
Instead of using the `step` method to perform the operation defined by the System object, you can call the object with arguments, as if it were a function. For example, `y = step(obj,x)` and `y = obj(x)` perform equivalent operations.
## Construction
`hydrophone = phased.IsotropicHydrophone` creates an isotropic hydrophone System object, `hydrophone`.
`hydrophone = phased.IsotropicHydrophone(Name,Value)` creates an isotropic hydrophone System object, with each specified property `Name` set to the specified `Value`. You can specify additional name-value pair arguments in any order as (`Name1,Value1`,...,`NameN,ValueN`).
## Properties
expand all
Operating frequency range of hydrophone, specified as a real-valued 1-by-2 row vector of the form `[LowerBound HigherBound]`. This property defines the frequency range over which the hydrophone has a response. The hydrophone element has zero response outside this frequency range. Units are in Hz.
Example: `[0 1000]`
Data Types: `double`
Voltage sensitivity of hydrophone, specified as a scalar or real-valued 1-by-K row vector. When you specify the voltage sensitivity as a scalar, that value applies to the entire frequency range specified by `FrequencyRange`. When you specify the voltage sensitivity as a vector, the frequency range is divided into K-1 equal intervals. The sensitivity values are assigned to the interval end points. The `step` method interpolates the voltage sensitivity for any frequency inside the frequency range. Units are in dB//1V/μPa. See Hydrophone Sensitivity for more details.
Example: `10`
Data Types: `double`
Backbaffle hydrophone element, specified as `false` or `true`. Set this property to `true` to backbaffle the hydrophone. When the hydrophone is backbaffled, the hydrophone response for all azimuth angles beyond ±90° from broadside are zero. Broadside is defined as 0° azimuth and 0° elevation.
When the value of this property is `false`, the hydrophone is not backbaffled.
## Methods
Specific to `phased.IsotropicHydrophone` Object
`beamwidth`
Compute and display beamwidth of sensor element pattern
`directivity`
Directivity of isotropic hydrophone
`isPolarizationCapable`
Polarization capability
`pattern`
Plot isotropic hydrophone directivity and patterns
`patternAzimuth`
Plot isotropic hydrophone directivity and response patterns versus azimuth
`patternElevation`
Plot isotropic hydrophone directivity and response patterns versus elevation
`step`
Voltage sensitivity of isotropic hydrophone
Common to All System Objects
`release`
Allow System object property value changes
## Examples
collapse all
Examine the response and patterns of an isotropic hydrophone operating between 1 kHz and 10 kHz.
Set up the hydrophone parameters. Obtain the voltage sensitivity at five different elevation angles: ${-30}^{\circ }$, ${-15}^{\circ }$, ${0}^{\circ }$, ${15}^{\circ }$ and ${30}^{\circ }$. All azimuth angles are at ${0}^{\circ }$. The sensitivities are computed at the signal frequency of 2 kHz.
```hydrophone = phased.IsotropicHydrophone('FrequencyRange',[1 10]*1e3); fc = 2e3; resp = hydrophone(fc,[0 0 0 0 0;-30 -15 0 15 30]);```
Draw a 3-D plot of the voltage sensitivity.
```pattern(hydrophone,fc,[-180:180],[-90:90],'CoordinateSystem','polar', ... 'Type','powerdb')```
Examine the response and patterns of an isotropic hydrophone at three different frequencies. The hydrophone operates between 1 kHz and 10 kHz. Specify the voltage sensitivity as a vector.
Set up the hydrophone parameters and obtain the voltage sensitivity at 45° azimuth and 30° elevation. Compute the sensitivities at the signal frequencies of 2, 5, and 7 kHz.
```hydrophone = phased.IsotropicHydrophone('FrequencyRange',[1 10]*1e3, ... 'VoltageSensitivity',[-100 -90 -100]); fc = [2e3 5e3 7e3]; resp = hydrophone(fc,[45;30])```
```resp = 1×3 14.8051 29.2202 24.4152 ```
Draw a 2-D plot of the voltage sensitivity as a function of azimuth.
```pattern(hydrophone,fc,[-180:180],0,'CoordinateSystem','rectangular',... 'Type','power')```
expand all
## References
[1] Urick, R.J. Principles of Underwater Sound. 3rd Edition. New York: Peninsula Publishing, 1996.
[2] Sherman, C.S., and J. Butler. Transducers and Arrays for Underwater Sound. New York: Springer, 2007.
[3] Allen, J.B., and D. Berkely. “Image method for efficiently simulating small-room acoustics”, Journal of the Acoustical Society of America. Vol. 65, No. 4. April 1979, pp. 943–950.
[4] Van Trees, H. Optimum Array Processing. New York: Wiley-Interscience, 2002, pp. 274–304.
## Extended Capabilities
Introduced in R2017a
|
2021-12-08 04:05:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 6, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8704736828804016, "perplexity": 4347.853395343622}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363437.15/warc/CC-MAIN-20211208022710-20211208052710-00004.warc.gz"}
|
http://clay6.com/qa/46002/let-a-5-the-following-statement-is-correct-or-incorrect-and-why-subset-a
|
Browse Questions
Home >> AIMS >> Class11 >> Math >> Sets
# Let A= {1, 2, {3, 4,}, 5}. The following statement is correct or incorrect and why? { $\phi$} $\subset$ A
Can you answer this question?
incorrect because $\phi$ $\in$ {$\phi$}; however, $\phi$ $\in$ A.
|
2017-03-30 14:44:05
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2018706351518631, "perplexity": 593.1742346610287}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218194601.22/warc/CC-MAIN-20170322212954-00153-ip-10-233-31-227.ec2.internal.warc.gz"}
|
https://dggs.alaska.gov/pubs/id/27657
|
# Chiao, L.Y., 1991
## Membrane deformation rate and geometry of subducting slabs
Chiao, L.Y.
1991
• ### Publisher:
University of Washington, Seattle
• ### Ordering Info:
Not available
Offshore
27657
## Bibliographic Reference
Chiao, L.Y., 1991, Membrane deformation rate and geometry of subducting slabs: University of Washington, Seattle, Ph.D. dissertation, 157 p.
## Abstract
The subduction process forces the oceanic lithosphere to change its geometric configuration from a spherical shell to the geometry delineated by the Wadati-Benioff seismicity. This change induces lateral membrane deformation in the slab in addition to the bending deformation typically analyzed in two-dimensional cross sections. Observations including the along-arc variations of slab geometry, seismic activity, and orientation of earthquake focal mechanisms, suggest that this membrane deformation is an important mechanism in controlling the evolution of the subduction zone structure and seismic generation pattern. To quantify this type of slab deformation, we assume that subducting slabs behave like thin, viscous sheets with either Newtonian or Power-Law rheology, flowing into a mantle with significantly lower viscosity. A non-linear optimization scheme is developed to find the slab geometry and the subduction flow field, minimizing the integrated total dissipation power by fixing boundary conditions constrained by the Wadati-Benioff seismicity and the relative plate convergence. The rationale behind this optimization is that since the subducted slab has strong resistance to membrane deformation and relatively little strength to respond to slab normal forces, finding the optimal configuration with the least amount of membrane deformation will thus provide insights on both the slab structure and the pattern of slab deformation. Experiments on the Cascadia subduction zone suggest that the proposed arch structure is a natural consequence of the subducted slab responding to the concave-oceanward trench. The arch also provides a plausible explanation for the origin of the Olympic Mountains accretionary prism in the context of the Critical Taper Theory. The concentration of seismicity beneath the Puget Sound area may be the result of bending the already arched slab. The computed deformation rate is dominated by along-arc compression under Puget Sound and the peak compressional strain rate is around 2 $\times$10$\sp{-16}$sec$\sp{-1}$, which is comparable to the value estimated from the total intraplate seismic moment release during the last century. In both the Alaska-Aleutian and NW-Pacific subduction zones, preliminary experiments predict similar arch structures. Also, modeling results provide plausible explanation for along-arc variations of the deformation regime in slabs that are not resolvable by 2-D approaches.
## Keywords
Theses and Dissertations
Top of Page
|
2020-10-25 17:06:29
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4631088674068451, "perplexity": 3847.295162419522}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107889574.66/warc/CC-MAIN-20201025154704-20201025184704-00299.warc.gz"}
|
https://codegolf.stackexchange.com/revisions/32097/4
|
4 of 4 added 546 characters in body
## PowerShell, 170167 166
[int]$x,[int]$y,$e,$m="$input"-split'\W'$d='NESW'.indexof($e) switch([char[]]$m){'R'{$d++}'L'{$d--}'M'{iex(-split'$y++$x++ $y--$x--')[$d%4]}} "$x,$y,"+'NESW'[$d%4]
Can't seem to golf this down further, which is a bit embarrassing. But all the obvious hacks don't really work here.
I can't iex the input because a) N, S, E and W would have to be functions for that to work (or I'd need to prefix that with $and b) 1,2,N would have to parse the N in expression mode, not being able to run a command. The switch seems to be the shortest way of doing the movement. Hash table with script blocks or strings isn't shorter either and for every other way apart from the switch I'd have the overhead of the explicit loop. I can't get rid of the IndexOf because a pipeline with ? is longer, still. I also can't get rid of the explicit types in the initial declaration because I have mixed types there, so a simple |%{+$_} doesn't help and every other option is longer.
Sometimes I hate input handling in PowerShell.
|
2019-08-23 08:47:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6714155673980713, "perplexity": 3066.6355699820015}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027318243.40/warc/CC-MAIN-20190823083811-20190823105811-00509.warc.gz"}
|
https://data.sandiegodata.org/dataset/california-academic-performance-index-1999-2012/
|
# California Academic Performance Index, 1999 – 2012
API scores for California schools
cde.ca.gov-api_scores-1
## Documentation
Compilation of multiple years of California Academic Performance Index scores into single files.
This dataset includes two files that combine multiple years of the California Academic Performance Index into single CSV files. These files have a few advantages over the oroginal data releases from The California Department of Education:
• Multiple years of records are combined into a single file
• Files are converted to CSV format to be useful with a wider range of software tools
• Column names are normalized to be consistent across years
• Data is presented in a consistent schema, with some erroneous values removed
If you do not need to analyze multiple years and can work with fixed with text or DBF files, the original, single-year source files may be more appropriate.
This data is a beta release, and has not been fully verified for fidelity with the upstream source. Use with caution.
This file and the attached schema.csv and documentation.html files describe the changes made to the original files in this compilation. See the California Department of Education website for complete documentation on the data in these datasets.
• Wrangler
|
2020-10-22 02:51:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1799352765083313, "perplexity": 3085.272085827616}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107878879.33/warc/CC-MAIN-20201022024236-20201022054236-00622.warc.gz"}
|
https://www.eduzip.com/ask/question/find-the-values-of-the-unknown-x-and-y-in-the-following-diagram-520958
|
Mathematics
# Find the values of the unknown x and y in the following diagram.
##### SOLUTION
Vertically opposite angles are equal
$x = 60^o$
Sum of all internal angles of a $\Delta le$ is $180^o$
$x + y + 30^o = 180^o$
$60^o + 30^o + y = 180^o$
$\therefore y = 90^o$
You're just one step away
Subjective Medium Published on 09th 09, 2020
Questions 120418
Subjects 10
Chapters 88
Enrolled Students 86
#### Realted Questions
Q1 Subjective Medium
In the given figure, $AOB$ is a straight line. Find the value of $x$ and also answer each of the following:
$\angle AOP =.....$
Asked in: Mathematics - Lines and Angles
1 Verified Answer | Published on 09th 09, 2020
Q2 Subjective Medium
In the figure. Find x , If $l\parallel m$ & $p\parallel q$
Asked in: Mathematics - Lines and Angles
1 Verified Answer | Published on 09th 09, 2020
Q3 Subjective Medium
Find the value of $x$
Asked in: Mathematics - Lines and Angles
1 Verified Answer | Published on 09th 09, 2020
Q4 Single Correct Medium
In the figure, if $\angle q=140$, what is the value of $\angle r- \angle p$?
• A. $0^o$
• B. $10^o$
• C. $130^o$
• D. $90^o$
Asked in: Mathematics - Lines and Angles
1 Verified Answer | Published on 09th 09, 2020
Q5 Subjective Medium
The difference between two supplementary angles is $100^o$. Then find the angles.
Asked in: Mathematics - Lines and Angles
1 Verified Answer | Published on 09th 09, 2020
|
2022-01-24 04:28:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6000694036483765, "perplexity": 6736.692639510697}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304471.99/warc/CC-MAIN-20220124023407-20220124053407-00099.warc.gz"}
|
http://blog.math.toronto.edu/GraduateBlog/2011/09/01/departmental-phd-thesis-exam-travis-squires/
|
## Departmental PhD Thesis Exam – Travis Squires
Wednesday, September 7, 2011, 10:10 a.m., in BA 6183
PhD Candidate: Travis Squires
(http://www.pispace.org/thesis/Thesis-Squires.pdf)
We begin our discussion by introducing some notions from operad
theory; in particular we discuss homotopy algebras over a quadratic
operad. We then proceed to describe Lie 2-algebras as homotopy
and Kapranov. Next we briefly discuss the structure of a braided
monoidal category. Following this, motivated by our discussion of
braided monoidal categories, a new structure is introduced which
we call a commutative 2-algebra. As with the lie 2-algebra
case we show how a commutative 2-algebra can be seen as a
distributive laws and Koszul operads.
|
2023-02-09 06:37:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6082237362861633, "perplexity": 3077.972388842433}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764501407.6/warc/CC-MAIN-20230209045525-20230209075525-00747.warc.gz"}
|
https://blog.mbedded.ninja/electronics/communication-protocols/high-level-data-link-control-hdlc/
|
COMMUNICATION PROTOCOLS
High-Level Data Link Control (HDLC)
Date Published: July 18, 2017 Last Modified: July 18, 2017
Overview
High-Level Data Link Control (HDLC) is a synchronous data-link layer protocol. It was developed by the International Organization for Standardization (ISO).
It only describes the data-link layer (layer 2 in the OSI model), and therefore is not really considered an communication protocol in it’s own right. It may be used by other communication protocols such as LVDS.
Acronyms/Glossary
TermDefinition
TermDefinition
Cisco HDLCAn extension to HDLC created by Cisco. Also known as cHDLC. Cisco HDLC uses an alternative framing structure to ISO HDLC.
FrameA unit that data is organised into and sent across the HDLC bus in.
HDLCHigh-level Data link Control.
PPPPoint-to-Point Protocol.
SLARPSerial Line Address Resolution Protocol. A control protocol used by Cisco HDLC to maintain serial link keepalives.
X.25A protocol for packet switched WAN communication. Very popular in the 1980's.
History
HDLC is a superset of IBM’s SDLC protocol (SDLC can be essentially imitated with HDLC in Normal Response Mode (NRM).
Framing
Framing for a HDLC packet is done with the special bit sequence 0x7E (0b01111110). This is used to mark the beginning and end of each frame. Because 0x7E can only be used to as a framing sequence, any occurrence of 0x7E in the packet must be removed. This is done differently depending of the framing type (synchronous or asynchronous, see below).
Synchronous Framing
To prevent 0x7E from ever occurring in the packet data, bit stuffing is used. Every time 5 consecutive 0‘s are transmitted, a 1 is inserted. The receiver must also look for 5 consecutive 0‘s and then remove the following 1.
Asynchronous Framing
Bit-stuffing is inconvenient when the communication protocol requires bits to be sent in groups such as bytes (e.g. RS-232). In this case, control-octet transparency is used instead (also called byte stuffing or octet stuffing).
With control-octet transparency, if either 0x7E (the frame boundary octet) or 0x7D (the control escape octet) appears in the packet data, the escape octet is first sent, followed by the original data, but with bit 5 inverted. This ensures that 0x7E is never found withing the packet data, and is only used for packet framing.
Obviously, the receiver has to look for escape octets, remove them from the data stream, and invert bit 5 of the next byte.
1. Normal Response Mode (NRM): A unbalanced configuration where only the primary terminal may initiate a data transfer.
2. Asynchronous Response Mode (ARM): An unbalanced configuration where secondary terminals may transmit without permission from the primary.
3. Asynchronous Balanced Mode (ABM): A balanced configuration in where all terminals are treated equally, and any may send frames at any time.
Standards
The HLDC protocol is specified in ISO 13239.
|
2020-07-09 03:54:23
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.37183305621147156, "perplexity": 5741.878831617043}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655898347.42/warc/CC-MAIN-20200709034306-20200709064306-00515.warc.gz"}
|
http://www.orangeyoulucky.blogspot.com/2010/03/today.html
|
## Thursday, March 11, 2010
### today...
Today is dedicate at large to the administrative work I dread so so much (but it needs to be done and gone). The silver lining, however, is that I decided to read 'Desing as Art' book by Bruno Munari (in between all the boring monotonous paper organizing)
Very interesting so far:) I wish I had this book 10 years ago...
|
2014-07-24 12:59:55
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8697198629379272, "perplexity": 6164.000038393609}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997888972.38/warc/CC-MAIN-20140722025808-00167-ip-10-33-131-23.ec2.internal.warc.gz"}
|
https://bdiemer.bitbucket.io/sparta/analysis.html
|
Analyzing SPARTA output¶
While SPARTA is running, it produces Console output that one should at least partially understand before using the outputs of SPARTA.
A SPARTA run produces a single HDF5 output file. The contents of this file are documented in The SPARTA HDF5 output format, and depend on the chosen Results & Analyses as well as the compile-time output settings (see Compiling SPARTA as well as the documentation about the different sub-modules).
The HDF5 file can be read using any HDF5 utility, but in practice matching the different datasets can be tedious. Thus, SPARTA comes with a much more convenient Python analysis package that reads a SPARTA file and translates the selected data into python dictionaries. This utility is the recommended way to read SPARTA files.
However, in many cases, the desired output is not the SPARTA file itself but an augmented halo catalog that can be created with the MORIA extension (see Creating halo catalogs with MORIA).
Regardless of how the SPARTA and/or MORIA files are read and analyzed, they use the common set of abbreviations for tracers, results, and analyses:
Type
Incarnation
Long name
Abbreviation
Tracer
tracer
tcr
Tracer
Particles
particles
ptl
Tracer
Subhalos
subhalos
sho
Result
result
res
Result
Infall
infall
ifl
Result
Splashsback
splashback
sbk
Result
Trajectory
trajectory
tjy
Result
OrbitCount
orbitcount
oct
Analysis
analysis
anl
Analysis
rsp
rsp
Analysis
DensityProfiles
profiles
prf
Analysis
HaloProperties
haloprops
hps
Contents
Previous topic
Halo properties analysis
Console output
|
2023-02-06 00:43:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49205824732780457, "perplexity": 3869.900941439014}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500294.64/warc/CC-MAIN-20230205224620-20230206014620-00697.warc.gz"}
|
https://tex.stackexchange.com/questions/408602/how-can-i-draw-this-a-tree-diagram-with-notes-on-arrows-using-forest
|
# How can I draw this (a tree diagram with notes on arrows) using forest?
Currently I can only write
\documentclass{article}
\usepackage{forest}
\begin{document}
\begin{forest}
for tree = {edge={->}}
[$a$
[$3a+1$]
[$\frac{a}{2}$]
]
\end{forest}
\end{document}
which produces a similar tree to the model, just without little words beside the arrows. Thanks.
• @TeXnician You mean my code snippet has errors? – Felix Fourcolor Jan 3 '18 at 10:46
• No, but we appreciate helping if we have a compilable code to copy and paste that runs, i.e. we do not have to guess documentclass, packages etc. (see also tex.meta.stackexchange.com/a/3225/124577) – TeXnician Jan 3 '18 at 10:49
Assign a name to each node, then use usual tikz commands with them. I did it like the following:
\documentclass{article}
\usepackage{forest}
\begin{document}
\begin{forest}
for tree = {edge={->}}
[$a$, name=root
[$3a+1$, name=left] {\path (root)-- node [rotate=65, yshift=3pt] {\tiny left} (left) ;}
[$\frac{a}{2}$, name=right] {\path (root)-- node [rotate=-65, yshift=3pt] {\tiny right} (right) ;}
]
\end{forest}
\end{document}
Another solution to use the istgame package:
\documentclass{standalone}
\usepackage{istgame}
\begin{document}
\begin{istgame}[->,sloped]
\xtdistance{20mm}{20mm}
\istroot(0)[null node]{$a$}
\istb{\mbox{odd}}[a]{3a+1}
\istb{\mbox{even}}[a]{\frac a2}
\endist
\end{istgame}
\end{document}
|
2019-10-20 01:22:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.922788143157959, "perplexity": 10953.402262721469}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986700560.62/warc/CC-MAIN-20191020001515-20191020025015-00094.warc.gz"}
|
https://www.bartleby.com/questions-and-answers/acyl-transfer-nucleophilic-substitution-at-carbonyl-reactions-proceed-in-two-stages-via-a-tetrahedra/ee330758-8eaa-4f74-9a93-10d308f26218
|
Acyl transfer (nucleophilic substitution at carbonyl) reactions proceed in two stages via a "tetrahedral intermediate." Draw the tetrahedral intermediate as it is first formed in the following reaction он H2SO4 (trace) +CH3OH reflux You do not have to consider stereochemistry. Include all valence lone pairs in your answer. . Do not include counter-ions, eg, Na+, 1, in your answer. In cases where there is more than one answer, just draw one.
Question
Acyl transfer (nucleophilic substitution at carbonyl) reactions proceed in two stages via a "tetrahedral intermediate." Draw the tetrahedral intermediate as it is first formed in the following reaction.
• You do not have to consider stereochemistry.
|
2021-05-09 07:14:12
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.838822603225708, "perplexity": 4366.547330314701}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988961.17/warc/CC-MAIN-20210509062621-20210509092621-00475.warc.gz"}
|
https://tex.stackexchange.com/questions/439470/add-href-in-subsection-make-the-url-become-capital
|
# Add \href in \subsection{} make the url become capital
I add \href in \subsection{}, this make the url become all capital, how can I deal with this?
For example, I use \subsection{\href{http://finance.sina.com.cn/roll/2018-06-30/doc-iheqpwqz3115534.shtml}{some text}} and run latex, it shows like this. And It contains an url.
Then when I click the link, the url becomes http://finance.sina.com.cn/ROLL/2018-06-30/DOC-IHEQPWQZ3115534.SHTML like this: in the web-browser. All the letters become capital, this url cannot open the right website.
What can I do to fix this? I want the url can open the right website. Thank you.
• Copying the example in your post body, I do not replicate the behaviour you describe. Please could you produce a Minimum Working Example which reproduces the problem? – preferred_anon Jul 6 '18 at 7:36
• An example that could reproduce this is \documentclass[british]{article} \usepackage[T1]{fontenc} \usepackage[utf8]{inputenc} \usepackage{babel} \usepackage{hyperref} \begin{document} \MakeUppercase{\href{http://example.com/lowercase}{some text}} \uppercase{\href{http://example.com/lowercase}{some text}} \end{document} – moewe Jul 7 '18 at 12:21
• Maybe tex.stackexchange.com/q/274734/35864 can help – moewe Jul 7 '18 at 12:22
• The work-around in the linked question seems to work here is well, \documentclass[british]{article} \usepackage[T1]{fontenc} \usepackage[utf8]{inputenc} \usepackage{babel} \usepackage{hyperref} \protected\def\mylink{http://example.com/lowercase} \begin{document} \MakeUppercase{\href{\mylink}{some text}} \uppercase{\href{\mylink}{some text}} \end{document} but maybe there is something cleverer. – moewe Jul 7 '18 at 12:24
• Sir, but if you use \href in \subsection{} like \subseciont{\href{}{}}, all the letters will become upper case. – Felix LL Jul 7 '18 at 12:45
|
2019-04-23 15:51:03
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9097546339035034, "perplexity": 7232.785804495277}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578605555.73/warc/CC-MAIN-20190423154842-20190423180842-00327.warc.gz"}
|
http://math.stackexchange.com/questions/131593/problem-on-multiplicative-subset
|
# Problem on multiplicative subset
Let $R$ be a ring, $S$ is a multiplicative subset of $R$. $a$ is an arbitrary element of $S$. Should there be 2 element $b,c \in S, b, c \neq 1$ such that $a=b.c$?
If not please give a counter example. Thank you very much!
-
Are you assuming that $S$ contains $1$ in your definition? – azarel Apr 14 '12 at 5:01
@azarel : Thanks. I have just edited. – Arsenaler Apr 14 '12 at 5:07
No, there is no such requirement. Consider $R=\mathbb{Z}$ and let $S$ be the set of all positive integers not divisible by $2$. This is a multiplicative set, and contains $3$. However, any expression of $3$ as a product $3=bc$ with $b$ and $c$ positive integers will have $b=1$ or $c=1$.
Consider $R=\Bbb Z$ and $S=a\Bbb Z$ with $a\not\in\{-1,0,1\}$, especially in light of prime factorization.
More specifically, say we take $R$ to be the integers and the multiples of some nonzero nonunit $a$ to be our multiplicative subset $S$. For a quick example, let $a=2$ and so $S$ is the set of even numbers closed under multiplication. Can $2$ be written as the product of two even numbers?
|
2016-05-30 12:40:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9097082614898682, "perplexity": 96.79613595798341}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464051002346.10/warc/CC-MAIN-20160524005002-00139-ip-10-185-217-139.ec2.internal.warc.gz"}
|
https://byjus.com/question-answer/the-profit-earned-by-selling-an-article-for-rs-900-is-double-the-loss-incurred/
|
Question
# The profit earned by selling an article for Rs. $$900$$ is double the loss incurred when the same article is sold for Rs. $$450$$. At what price should the article be sold to make $$25$$% profit?
A
Rs. 750
B
Rs. 800
C
Rs. 600
D
Rs. 900
Solution
## The correct option is A Rs. $$750$$Let the C.P. be Rs. $$x$$. Then, $$900 - x = 2(x - 450)$$$$\Rightarrow 900-x=2x-900$$$$\Rightarrow 3x=1800$$$$\Rightarrow x=600$$i.e., C.P. $$=$$ Rs. $$600$$, Profit $$= 25$$ $$\%$$$$\therefore$$ S.P. of the article $$=$$ Rs. $$\left (\dfrac {600\times 125}{100}\right )=$$ Rs. $$750$$.Mathematics
Suggest Corrections
0
Similar questions
View More
People also searched for
View More
|
2022-01-18 04:25:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9691222310066223, "perplexity": 5187.765021504328}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300722.91/warc/CC-MAIN-20220118032342-20220118062342-00334.warc.gz"}
|
https://hk.edugain.com/questions/If-the-altitudes-from-the-two-vertices-of-a-triangle-to-the-opposite-sides-are-equal-prove-that-the-triangle-is-isosceles
|
### If the altitudes from the two vertices of a triangle to the opposite sides are equal, prove that the triangle is isosceles.
1. Let $BD$ and $CE$ be the altitudes from the vertices $B$ and $D$ of $\triangle ABC.$
As the altitude of a triangle is perpendicular to the opposite side, we have $$BD \perp AC \text { and } CE \perp AB$$ Also, we are told that the altitudes are of equal length. $$\implies BD = CE$$
2. We need to prove that $AB = AC.$
3. In $\triangle ADB$ and $\triangle AEC$, we have \begin{aligned} &BD = CE &&[\text{Given}] \\ &\angle BAD = \angle CAE &&[\text{Common}]\\ &\angle ADB = \angle AEC = 90^ \circ &&[ BD \perp AC \text { and } CE \perp AB] \\ \therefore \space &\triangle ADB \cong \triangle AEC &&[\text{By AAS criterion}] \end{aligned}
4. As the corresponding parts of congruent triangles are equal, we have $AB = AC .$
Hence, $\triangle ABC$ is isosceles.
|
2023-04-01 08:22:36
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9993292093276978, "perplexity": 790.5467965314915}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949701.56/warc/CC-MAIN-20230401063607-20230401093607-00337.warc.gz"}
|
https://math.stackexchange.com/questions/2493699/controllable-realisation-of-fracs414s42s32s1-is-both-controllable
|
# Controllable realisation of $\frac{s^4+1}{4s^4+2s^3+2s+1}$ is both controllable and observable?
I am trying to find the controllable realization of the following transfer function:
$$H(s)=\frac{s^4+1}{4s^4+2s^3+2s+1}$$
I approach this by first using polynomial division to assure that $H(s)$ is strictly proper. This yields:
$$H(s)=1/4+\frac{-1/8\cdot s^3-1/8\cdot s+3/16}{s^4+1/2\cdot s^3+1/2\cdot s+1/4}$$
From this I can extract the controllable canonical form where the matrices A, B, C and D are:
$$A=\begin{bmatrix}0 & 1 & 0 & 0\\ 0 & 0 & 1& 0 \\ 0 & 0 & 0 & 1\\-1/4 & -1/2 & 0 & -1/2\end{bmatrix}$$
$$B=\begin{bmatrix}0\\0\\0\\1\end{bmatrix}$$ $$C=\begin{bmatrix}3/16 &-1/8 &0 &-1/8\end{bmatrix}$$ $$D=1/4$$
From this I am confused because when I construct the observability and controllability matrices I found that the system is both controllable and observable (both matrices have full rank). This seems like a contradiction. Can it be possible? What am I doing wrong?
• Why should a system not be both controllable and observable? – SampleTime Oct 28 '17 at 14:06
• Well I thought that when a system is in controllable canonical form then it must be controllable, but unobservable. – john melon Oct 28 '17 at 14:07
• Why do you think that your original $H(s)$ is not controllable? – Kwin van der Veen Oct 28 '17 at 18:27
• A state space model in the controllable canonical form is indeed controllable, however it could be unobservable it doesn't have to be. – Kwin van der Veen Oct 28 '17 at 20:17
• If you don't have pole-zero cancellation in your transfer function, then every controllable realization is a minimal realization, thus an observable one. (and vice versa) – polfosol Oct 31 '17 at 13:53
|
2019-05-22 11:44:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7685502171516418, "perplexity": 337.7940460115423}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256797.20/warc/CC-MAIN-20190522103253-20190522125253-00478.warc.gz"}
|
https://en.wikipedia.org/wiki/Laplacian_distribution
|
Laplace distribution
(Redirected from Laplacian distribution)
Parameters Probability density function Cumulative distribution function ${\displaystyle \mu }$ location (real) ${\displaystyle b>0}$ scale (real) ${\displaystyle x\in (-\infty ;+\infty )\,}$ ${\displaystyle {\frac {1}{2\,b}}\exp \left(-{\frac {|x-\mu |}{b}}\right)\,}$ ${\displaystyle {\begin{cases}{\frac {1}{2}}\exp \left({\frac {x-\mu }{b}}\right)&{\mbox{if }}x<\mu \\[8pt]1-{\frac {1}{2}}\exp \left(-{\frac {x-\mu }{b}}\right)&{\mbox{if }}x\geq \mu \end{cases}}}$ ${\displaystyle \mu }$ ${\displaystyle \mu }$ ${\displaystyle \mu }$ ${\displaystyle 2b^{2}}$ ${\displaystyle 0}$ ${\displaystyle 3}$ ${\displaystyle \log(2be)}$ ${\displaystyle {\frac {\exp(\mu \,t)}{1-b^{2}\,t^{2}}}\,\!{\text{ for }}|t|<1/b\,}$ ${\displaystyle {\frac {\exp(\mu \,i\,t)}{1+b^{2}\,t^{2}}}\,\!}$
In probability theory and statistics, the Laplace distribution is a continuous probability distribution named after Pierre-Simon Laplace. It is also sometimes called the double exponential distribution, because it can be thought of as two exponential distributions (with an additional location parameter) spliced together back-to-back, although the term 'double exponential distribution' is also sometimes used to refer to the Gumbel distribution. The difference between two independent identically distributed exponential random variables is governed by a Laplace distribution, as is a Brownian motion evaluated at an exponentially distributed random time. Increments of Laplace motion or a variance gamma process evaluated over the time scale also have a Laplace distribution.
Characterization
Probability density function
A random variable has a ${\displaystyle {\textrm {Laplace}}(\mu ,b)}$ distribution if its probability density function is
${\displaystyle f(x\mid \mu ,b)={\frac {1}{2b}}\exp \left(-{\frac {|x-\mu |}{b}}\right)\,\!}$
${\displaystyle ={\frac {1}{2b}}\left\{{\begin{matrix}\exp \left(-{\frac {\mu -x}{b}}\right)&{\text{if }}x<\mu \\[8pt]\exp \left(-{\frac {x-\mu }{b}}\right)&{\text{if }}x\geq \mu \end{matrix}}\right.}$
Here, ${\displaystyle \mu }$ is a location parameter and ${\displaystyle b>0}$, which is sometimes referred to as the diversity, is a scale parameter. If ${\displaystyle \mu =0}$ and ${\displaystyle b=1}$, the positive half-line is exactly an exponential distribution scaled by 1/2.
The probability density function of the Laplace distribution is also reminiscent of the normal distribution; however, whereas the normal distribution is expressed in terms of the squared difference from the mean ${\displaystyle \mu }$, the Laplace density is expressed in terms of the absolute difference from the mean. Consequently, the Laplace distribution has fatter tails than the normal distribution.
Differential equation
The pdf of the Laplace distribution is a solution of the following differential equation:
${\displaystyle {\begin{cases}\left\{{\begin{array}{l}bf'(x)+f(x)=0\\[8pt]f(0)={\frac {e^{\frac {\mu }{b}}}{2b}}\end{array}}\right\}&{\text{if }}x\geq \mu \\[8pt]\left\{{\begin{array}{l}bf'(x)-f(x)=0\\[8pt]f(0)={\frac {e^{-{\frac {\mu }{b}}}}{2b}}\end{array}}\right\}&{\text{if }}x<\mu \end{cases}}}$
Cumulative distribution function
The Laplace distribution is easy to integrate (if one distinguishes two symmetric cases) due to the use of the absolute value function. Its cumulative distribution function is as follows:
{\displaystyle {\begin{aligned}F(x)&=\int _{-\infty }^{x}\!\!f(u)\,\mathrm {d} u={\begin{cases}{\frac {1}{2}}\exp \left({\frac {x-\mu }{b}}\right)&{\mbox{if }}x<\mu \\1-{\frac {1}{2}}\exp \left(-{\frac {x-\mu }{b}}\right)&{\mbox{if }}x\geq \mu \end{cases}}\\&={\tfrac {1}{2}}+{\tfrac {1}{2}}\operatorname {sgn}(x-\mu )\left(1-\exp \left(-{\frac {|x-\mu |}{b}}\right)\right).\end{aligned}}}
The inverse cumulative distribution function is given by
${\displaystyle F^{-1}(p)=\mu -b\,\operatorname {sgn}(p-0.5)\,\ln(1-2|p-0.5|).}$
Generating random variables according to the Laplace distribution
Given a random variable ${\displaystyle U}$ drawn from the uniform distribution in the interval ${\displaystyle \left(-1/2,1/2\right]}$, the random variable
${\displaystyle X=\mu -b\,\operatorname {sgn}(U)\,\ln(1-2|U|)}$
has a Laplace distribution with parameters ${\displaystyle \mu }$ and ${\displaystyle b}$. This follows from the inverse cumulative distribution function given above.
A ${\displaystyle {\textrm {Laplace}}(0,b)}$ variate can also be generated as the difference of two i.i.d. ${\displaystyle {\textrm {Exponential}}(1/b)}$ random variables. Equivalently, ${\displaystyle {\textrm {Laplace}}(0,1)}$ can also be generated as the logarithm of the ratio of two i.i.d. uniform random variables.
Parameter estimation
Given ${\displaystyle N}$ independent and identically distributed samples ${\displaystyle x_{1},x_{2},...,x_{N}}$, the maximum likelihood estimator ${\displaystyle {\hat {\mu }}}$ of ${\displaystyle \mu }$ is the sample median,[1] and the maximum likelihood estimator ${\displaystyle {\hat {b}}}$ of ${\displaystyle b}$ is
${\displaystyle {\hat {b}}={\frac {1}{N}}\sum _{i=1}^{N}|x_{i}-{\hat {\mu }}|}$
(revealing a link between the Laplace distribution and least absolute deviations).
Moments
${\displaystyle \mu _{r}'={\bigg (}{\frac {1}{2}}{\bigg )}\sum _{k=0}^{r}{\bigg [}{\frac {r!}{(r-k)!}}b^{k}\mu ^{(r-k)}\{1+(-1)^{k}\}{\bigg ]}={\frac {m^{n+1}}{2b}}\left(e^{m/b}E_{-n}(m/b)-e^{-m/b}E_{-n}(-m/b)\right)}$
where ${\displaystyle E_{n}()}$ is the generalized exponential integral function ${\displaystyle E_{n}(x)=x^{n-1}\Gamma (1-n,x)}$.
Related distributions
• If ${\displaystyle X\sim {\textrm {Laplace}}(\mu ,b)}$ then ${\displaystyle kX+c\sim {\textrm {Laplace}}(k\mu +c,kb)}$.
• If ${\displaystyle X\sim {\textrm {Laplace}}(0,b)}$ then ${\displaystyle \left|X\right|\sim {\textrm {Exponential}}\left(b^{-1}\right)}$. (Exponential distribution)
• If ${\displaystyle X,Y\sim {\textrm {Exponential}}(\lambda )}$ then ${\displaystyle X-Y\sim {\textrm {Laplace}}\left(0,\lambda ^{-1}\right)}$
• If ${\displaystyle X\sim {\textrm {Laplace}}(\mu ,b)}$ then ${\displaystyle \left|X-\mu \right|\sim {\textrm {Exponential}}(b^{-1})}$.
• If ${\displaystyle X\sim {\textrm {Laplace}}(\mu ,b)}$ then ${\displaystyle X\sim {\textrm {EPD}}(\mu ,b,0)}$. (Exponential power distribution)
• If ${\displaystyle X_{1},...,X_{4}\sim {\textrm {N}}(0,1)}$ (Normal distribution) then ${\displaystyle X_{1}X_{2}-X_{3}X_{4}\sim {\textrm {Laplace}}(0,1)}$.
• If ${\displaystyle X_{i}\sim {\textrm {Laplace}}(\mu ,b)}$ then ${\displaystyle {\frac {\displaystyle 2}{b}}\sum _{i=1}^{n}|X_{i}-\mu |\sim \chi ^{2}(2n)}$. (Chi-squared distribution)
• If ${\displaystyle X,Y\sim {\textrm {Laplace}}(\mu ,b)}$ then ${\displaystyle {\tfrac {|X-\mu |}{|Y-\mu |}}\sim \operatorname {F} (2,2)}$. (F-distribution)
• If ${\displaystyle X,Y\sim {\textrm {U}}(0,1)}$ (Uniform distribution) then ${\displaystyle \log(X/Y)\sim {\textrm {Laplace}}(0,1)}$.
• If ${\displaystyle X\sim {\textrm {Exponential}}(\lambda )}$ and ${\displaystyle Y\sim {\textrm {Bernoulli}}(0.5)}$ (Bernoulli distribution) independent of ${\displaystyle X}$, then ${\displaystyle X(2Y-1)\sim {\textrm {Laplace}}\left(0,\lambda ^{-1}\right)}$.
• If ${\displaystyle X\sim {\textrm {Exponential}}(\lambda )}$ and ${\displaystyle Y\sim {\textrm {Exponential}}(\nu )}$ independent of ${\displaystyle X}$, then ${\displaystyle \lambda X-\nu Y\sim {\textrm {Laplace}}(0,1)}$
• If ${\displaystyle X}$ has a Rademacher distribution and ${\displaystyle Y\sim {\textrm {Exponential}}(\lambda )}$ then ${\displaystyle XY\sim {\textrm {Laplace}}(0,1/\lambda )}$.
• If ${\displaystyle V\sim {\textrm {Exponential}}(1)}$ and ${\displaystyle Z\sim N(0,1)}$ independent of ${\displaystyle V}$, then ${\displaystyle X=\mu +b{\sqrt {2V}}Z\sim \mathrm {Laplace} (\mu ,b)}$.
• If ${\displaystyle X\sim {\textrm {GeometricStable}}(2,0,\lambda ,0)}$ (geometric stable distribution) then ${\displaystyle X\sim {\textrm {Laplace}}(0,\lambda )}$.
• The Laplace distribution is a limiting case of the hyperbolic distribution.
• If ${\displaystyle X|Y\sim {\textrm {N}}(\mu ,\sigma =Y)}$ with ${\displaystyle Y\sim {\textrm {Rayleigh}}(b)}$ (Rayleigh distribution) then ${\displaystyle X\sim {\textrm {Laplace}}(\mu ,b)}$.
Relation to the exponential distribution
A Laplace random variable can be represented as the difference of two iid exponential random variables.[2] One way to show this is by using the characteristic function approach. For any set of independent continuous random variables, for any linear combination of those variables, its characteristic function (which uniquely determines the distribution) can be acquired by multiplying the corresponding characteristic functions.
Consider two i.i.d random variables ${\displaystyle X,Y\sim {\textrm {Exponential}}(\lambda )}$. The characteristic functions for ${\displaystyle X,-Y}$ are
${\displaystyle {\frac {\lambda }{-it+\lambda }},\quad {\frac {\lambda }{it+\lambda }}}$
respectively. On multiplying these characteristic functions (equivalent to the characteristic function of the sum of therandom variables ${\displaystyle X+(-Y)}$), the result is
${\displaystyle {\frac {\lambda ^{2}}{(-it+\lambda )(it+\lambda )}}={\frac {\lambda ^{2}}{t^{2}+\lambda ^{2}}}.}$
This is the same as the characteristic function for ${\displaystyle Z\sim {\textrm {Laplace}}(0,1/\lambda )}$, which is
${\displaystyle {\frac {1}{1+{\frac {t^{2}}{\lambda ^{2}}}}}.}$
Sargan distributions
Sargan distributions are a system of distributions of which the Laplace distribution is a core member. A ${\displaystyle p}$th order Sargan distribution has density[3][4]
${\displaystyle f_{p}(x)={\tfrac {1}{2}}\exp(-\alpha |x|){\frac {\displaystyle 1+\sum _{j=1}^{p}\beta _{j}\alpha ^{j}|x|^{j}}{\displaystyle 1+\sum _{j=1}^{p}j!\beta _{j}}},}$
for parameters ${\displaystyle \alpha \geq 0,\beta _{j}\geq 0}$. The Laplace distribution results for ${\displaystyle p=0}$.
Applications
The Laplacian distribution has been used in speech recognition to model priors on DFT coefficients [5] and in JPEG image compression to model AC coefficients [6] generated by a DCT.
The addition of noise drawn from a Laplacian distribution, with scaling parameter appropriate to a function's sensitivity, to the output of a statistical database query is the most common means to provide differential privacy in statistical databases.
In regression analysis, the least absolute deviations estimate arises as the maximum likelihood estimate if the errors have a Laplace distribution.
The Lasso can be thought of as a Bayesian regression with a Laplacian prior.
History
This distribution is often referred to as Laplace's first law of errors. He published it in 1774 when he noted that the frequency of an error could be expressed as an exponential function of its magnitude once its sign was disregarded.[7][8]
Keynes published a paper in 1911 based on his earlier thesis wherein he showed that the Laplace distribution minimised the absolute deviation from the median.[9]
|
2017-04-26 22:11:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 92, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9596320390701294, "perplexity": 405.63055437883816}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917121665.69/warc/CC-MAIN-20170423031201-00435-ip-10-145-167-34.ec2.internal.warc.gz"}
|
https://gsebsolutions.com/gseb-class-12-statistics-notes-part-2-chapter-5/
|
# GSEB Class 12 Statistics Notes Part 2 Chapter 5 Differentiation
This GSEB Class 12 Commerce Statistics Notes Part 2 Chapter 5 Differentiation Posting covers all the important topics and concepts as mentioned in the chapter.
## Differentiation Class 12 GSEB Notes
Differentiation and Derivative:
Differentiation:
A method of studying the rate of change in the dependent variable (function of independent variable) with respect to small change in the value of independent variable is called differentiation.
The process of obtaining derivative of a function is called differentiation.
Relative Change:
y = f(x). When the value of x changes from a to a + h, the value of f(x) will change from f(a) to f(a + h). Thus, for a change of h in the value of x, there is a change of f(a + h) – f(a) in the value of f(x). So, for change of h in the value of a, the relative change in the function will be $$\frac{f(a+h)-f(a)}{h}$$
Derivative: Let f: A → R and a e A; where, A is an open interval of R. If h is made very small and
exists, then this limit of a function f is called derivative at x = a. It is denoted by f'(a).
If y = f(x). then f’ (x) is denoted by $$\frac{d y}{d x}$$
Some Standard Derivatives:
The following standard derivatives are used to obtain the derivative of functions directly without using the definition of derivative:
• If y = xn, then $$\frac{d y}{d x}$$ = n. xn-1
• If y = k, then $$\frac{d y}{d x}$$ = 0; k = constant
Second Order Derivative: If y = f(x), then $$\frac{d y}{d x}$$ = f'(x), is called first order derivative. The derivative of the first order derivative is called second order derivative. It is denoted by $$\frac{d^{2} y}{d x^{2}}$$ OR f”(x).
Thus, $$\frac{d^{2} y}{d x^{2}}=\frac{d}{d x}\left[\frac{d y}{d x}\right]$$
Or
f”(x) = $$\frac{d}{d x}$$[f'(x)]
It is used for mlnhnisatlon of cost function, maximisation of revenue function and maximisation of profit function.
Increasing Function and Decreasing Function:
1. Increasing Function: y = f(x). If h is a very small positive number and f(a + h) >f(a) and f(a)<f(a-h), then at x = a, the function f(x) is called increasing function. That is, if at x = a, the function f(x) is increasing function then f(a) > 0.
2. Decreasing Function: y = fx). If h is a very small positive number and f(a + h) < f(a) and f(a) < f(a-h), then at x = a, the function f(x) is called decreasing function. That is, if at x = a, the function fix) is decreasing function then f(a) < 0.
Maximum and Minimum Values of a Function:
Maximum Value of a Function:
y = f(x). If h is a very small positive number and f(a) > f(a + h) and f(a) > f(a – h), then at x = a, the function f(x) is called maximum.
The conditions for maximisation of function at x = a:
• Necessary condition: f(a) = 0
• Sufficient condition: f”(a) < 0 (Negative)
Minimum Value of a Function:
y = f(x). If h is . a very small positive number and f(a) < f(a + h) and f(a) < f(a – h), then at x = a, the function f(x) is called minimum.
The condition for minimisation of function at x = a:
• Necessary condition: f(a) = 0
• Sufficient condition: f(a) > 0 (Positive)
Marginal Revenue and Marginal Cost:
Marginal Revenue:
The change in total revenue with respect to a small change in demand of an item is called marginal revenue.
• x = Demand of an item, p = Price of an item, then total revenue R = x . p
• Marginal revenue is obtained by finding derivative of R with respect to x.
Thus, marginal revenue = $$\frac{d \mathrm{R}}{d x}$$
Marginal Cost:
• The change in total cost with respect to a small change in production is called marginal cost.
• x = Production of an item, C = Production cost, then finding derivative of C with respect to x, marginal cost is obtained.
Thus, marginal cost = $$\frac{d \mathrm{C}}{d x}$$
Elasticity of Demand:
The ratio of percentage change in demand and the percentage change in price is called elasticity of demand. Thus,
Elasticity of demand = $$=-\frac{\text { Percentage change in demand }}{\text { Percentage change in price }}$$
• The demand of an item x is a function of the price of an item p, i.e., x = f(p).
• The relation between demand of an item and its price is inverse. Hence, the negative sign is taken in the formula of elasticity of demand.
• Demand x is a function of price p. Hence, taking the derivative of x with respect to p the elasticity of demand is obtained as follows: Elasticity of demand = $$-\frac{p}{x} \cdot \frac{d x}{d p}$$
Minimisation of Cost:
x = Units of production, C = Production cost. The conditions for minimisation of cost are as follows:
• Necessary condition: $$\frac{d \mathrm{C}}{d x}$$ = 0
• Sufficient condition: $$\frac{d^{2} \mathrm{C}}{d x^{2}}$$ > 0 (Positive)
Maximisation of Revenue:
x Demand of an item, p = Price per unit of an item, Revenue R = x . p.
The conditions for maximisation of revenue are as follows:
• Necessary condition: $$\frac{d \mathrm{R}}{d x}$$ = 0
• Sufficient condition: $$\frac{d^{2} \mathrm{R}}{d x^{2}}$$ < 0 (Negative)
Maximisation of Profit:
If R = Total revenue, C = Total production cost, then profit P = R – C.
The conditions for maximisation of profit are as follows:
• Necessary condition: $$\frac{d \mathrm{P}}{d x}$$ = 0
• Sufficient condition: $$\frac{d^{2} \mathrm{P}}{d x^{2}}$$ < 0 (Negative)
Stationary Points of a Function:
y = f(x). Points at which the maximum and minimum values of f(x) are obtained are called the stationary points of function.
Necessary condition for stationary points: f'(x) = 0
Important Formulae:
1. Derivative:
By Definition:
y = f(x).
By standard Forms:
• y = f(x) = xn $$\frac{d y}{d x}$$OR f'(x) = nxn-1
• y = f(x) = k; $$\frac{d y}{d x}$$ OR f'(x) = o
Where, k = constant
2. Rules for Differentiation: u and y are differentiable functions of x.
• Rule of additIon – subtraction:
If y = u ± v, then $$\frac{d y}{d x}=\frac{d u}{d x} \pm \frac{d v}{d x}$$
• Rule of multiplication:
If y = u . u. then $$\frac{d y}{d x}=u \cdot \frac{d v}{d x}+v \cdot \frac{d u}{d x}$$
• Rule of Division:
If y = $$\frac{u}{v}$$; u ≠ 0. then $$\frac{d y}{d x}=\frac{v \cdot \frac{d u}{d x}-u \cdot \frac{d v}{d x}}{v^{2}}$$
• Chain Rule: If y is a function of u and u is a function of x, then $$\frac{d y}{d x}=\frac{d y}{d u} \cdot \frac{d u}{d x}$$
3. Increasing and Decreasing Function:
• At x = a, for increasing function f'(a) > 0
• At x = a, for decreasing function f'(a) < 0
4. Conditions for maximum value of a function:
For the function f(x) to be maximum at x = a
• Necessary condition: f'(a) = 0
• Sufficient condition: f”(a) < 0 (Negative)
5. Conditions for minimum value of a function:
For the function f(x) to be minimum at x = a
• Necessary condition: f'(a) = 0
• Sufficient condition: f”(a) > 0 (Positive)
6. Marginal Revenue = $$\frac{d \mathrm{R}}{d x}$$
R = x .p, x = Demand, p = Price
7. Marginal cost = $$\frac{d \mathrm{C}}{d x}$$
C = Production cost, x = Production
8. Elasticity of demand = $$-\frac{p}{x} \cdot \frac{d x}{d p}$$
p = Price, x = Demand
9. Conditions for Minimum Cost:
• Necessary condition: $$\frac{d \mathrm{C}}{d x}$$ = 0
• Sufficient condition: $$\frac{d^{2} \mathrm{C}}{d x^{2}}$$ > 0
Where, C = Production cost, x = Production
10. Conditions for Maximum Revenue:
• Necessary condition: $$\frac{d \mathrm{R}}{d x}$$ = 0
• Sufficient condition: $$\frac{d^{2} \mathrm{R}}{d x^{2}}$$ < 0
Where, R = xp; x = Demand, p = Price
11. Conditions for Maximum Profit:
• Necessary condition $$\frac{d \mathrm{P}}{d x}$$ = 0
• Sufficient condition: $$\frac{d^{2} \mathrm{P}}{d x^{2}}$$ < 0
Where, P = R – C; P = Profit, R = Total revenue, C = Total cost ,
12. The necessary condition for stationary points of a function:
$$\frac{d y}{d x}$$ = 0 OR f'(x) = 0
Remember for Derivative of Function
1. y = xn $$\frac{d y}{d x}$$ = n. xn-1 2. y = x $$\frac{d y}{d x}$$ = 1 3. y = x2 $$\frac{d y}{d x}$$ = 2x 4. y = x3 $$\frac{d y}{d x}$$ = 3x2 5. y = c $$\frac{d y}{d x}$$ = 0, c = constant 6. y = ax $$\frac{d y}{d x}$$ = a, a = constant 7. y = $$\frac{1}{x}$$ $$\frac{d y}{d x}=-\frac{1}{x^{2}}$$, c = constant 8. y = $$\frac{c}{x}$$ $$\frac{d y}{d x}=-\frac{c}{x^{2}}$$; c = constant 9. y = $$\frac{c}{a x-b}$$ $$\frac{d y}{d x}=-\frac{a c}{(a x-b)^{2}}$$; a, b, c = constant 10. y = $$\frac{c}{a-bx}$$ $$\frac{d y}{d x}=\frac{b c}{(a-b x)^{2}}$$; a, b, c = constant 11. y = $$\frac{c}{z}$$ $$\frac{d y}{d z}=-\frac{c}{z^{2}}$$; c = constant
|
2023-03-23 20:55:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7615947723388672, "perplexity": 1011.9262181965936}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945183.40/warc/CC-MAIN-20230323194025-20230323224025-00519.warc.gz"}
|
https://www.physicsforums.com/threads/universal-gravitation-and-neutron-stars.359614/
|
# Universal Gravitation and neutron stars
1. Dec 1, 2009
### 1st2fall
1. The problem statement, all variables and given/known data
Certain neutron stars (extremely dense stars) are believed to be rotating at about 6 rev/s. If such a star has a radius of 15 km, what must be its minimum mass so that material on its surface remains in place during the rapid rotation?
G=6.67*10-11m3 kg-1 s-2
2. Relevant equations
$$F_c{}=\frac{mv^{2}}{r}$$
$$F_g{}=\frac{GM_1{M_2{}}}{r^{2}}$$
3. The attempt at a solution
30$$\pi$$km*6rev/s=180$$\pi$$km/s
which gives the linear velocity of something on the surface of the neutron star.... but I'm clueless as to how to arrive at a mass of the star from it. I could the Centripetal acceleration but I'm not sure how that's related here. the only thing I can think of there is setting the centripetal acceleration equal to the gravitation acceleration which gives
Gm/r2=v2/r
m=v2r/G
which yields something like 7.28635682 × 10^24(kg?)
I'm just looking for advice on what I'm actually looking to do. I don't know what I should be looking for....
2. Dec 2, 2009
### ehild
Your formula is correct, but check the calculation.
ehild
3. Dec 2, 2009
### 1st2fall
Thank you very much! I realized a dropped a "pi" at some point when I went back over it.
|
2017-08-22 06:23:04
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5923635959625244, "perplexity": 637.3202508751831}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886110471.85/warc/CC-MAIN-20170822050407-20170822070407-00419.warc.gz"}
|
https://socratic.org/questions/what-are-the-asymptotes-of-f-x-1-x-10-1-x-20
|
What are the asymptotes of f(x)=(1/(x-10))+(1/(x-20))?
Apr 10, 2018
y=0 if x =>+-oo, f(x) =-oo if x=>10^-, f(x)=+oo if x=>10^+, f(x) =-oo if x=>20^-, f(x)=+oo if x=>20^+
Explanation:
$f \left(x\right) = \frac{1}{x - 10} + \frac{1}{x - 20}$ let's find first limits.
Actually they are pretty obvious :
$L i m \left(x \to \pm \infty\right) f \left(x\right) = L i m \left(x \to \pm \infty\right) \frac{1}{x - 10} + \frac{1}{x - 20} = 0 + 0 = 0$ (when you divide a rational number by an infinite, the result is near to 0)
Now let's study limits in 10 and in 20.
$L i m \left(x \implies {10}^{-}\right) = \frac{1}{{0}^{-}} - \frac{1}{10} = - \infty$
$L i m \left(x \implies {20}^{-}\right) = \frac{1}{{0}^{-}} + \frac{1}{10} = - \infty$
$L i m \left(x \implies {10}^{+}\right) = \frac{1}{{0}^{+}} - \frac{1}{10} = + \infty$
$L i m \left(x \implies {20}^{-}\right) = \frac{1}{{0}^{+}} + \frac{1}{10} = + \infty$
|
2021-12-01 15:31:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 7, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9069004058837891, "perplexity": 11012.072104624467}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964360803.6/warc/CC-MAIN-20211201143545-20211201173545-00255.warc.gz"}
|
https://www.semanticscholar.org/paper/Cantor-Bernstein-implies-Excluded-Middle-Pradic-Brown/98986ca00ecfe5cd0bc30e3348a9091542389b72
|
• Corpus ID: 125943205
# Cantor-Bernstein implies Excluded Middle
@article{Pradic2019CantorBernsteinIE,
title={Cantor-Bernstein implies Excluded Middle},
journal={ArXiv},
year={2019},
volume={abs/1904.09193}
}
• Published 19 April 2019
• Mathematics
• ArXiv
We prove in constructive logic that the statement of the Cantor-Bernstein theorem implies excluded middle. This establishes that the Cantor-Bernstein theorem can only be proven assuming the full power of classical logic. The key ingredient is a theorem of Mart\'in Escard\'o stating that quantification over a particular subset of the Cantor space $2^{\mathbb{N}}$, the so-called one-point compactification of $\mathbb{N}$, preserves decidable predicates.
7 Citations
The Cantor–Schröder–Bernstein Theorem for $$\infty$$-groupoids
We show that the Cantor-Schroder-Bernstein Theorem for homotopy types, or $\infty$-groupoids holds in the following form: For any two types, if each one is embedded into the other, then they are
The Cantor–Schröder–Bernstein Theorem for -groupoids
We show that the Cantor–Schröder–Bernstein Theorem for homotopy types, or ∞groupoids, holds in the following form: For any two types, if each one is embedded into the other, then they are equivalent.
Computational Back-And-Forth Arguments in Constructive Type Theory
The back-and-forth method is a well-known technique to establish isomorphisms of countable structures. In this proof pearl, we formalise this method abstractly in the framework of constructive type
Infinite Omniscient Sets in Constructive Mathematics
• Mathematics, Computer Science
• 2020
It is proved that N∞ the subset of all descending sequences in 2N, satisfies this omniscience principle and then it is shown how this can be generalised to many other subsets of 2N.
The generalised continuum hypothesis implies the axiom of choice in Coq
• Economics
CPP
• 2021
Two Coq mechanisations of Sierpinski's result that the generalised continuum hypothesis (GCH) implies the axiom of choice (AC) are discussed and compared, concerning type-theoretic formulations of GCH and AC.
Countable sets versus sets that are countable in reverse mathematics
The program Reverse Mathematics (RM for short) seeks to identify the axioms necessary to prove theorems of ordinary mathematics, usually working in the language of second-order arithmetic L 2 . A
Division by Two, in Homotopy Type Theory
• Mathematics
FSCD
• 2022
Natural numbers are isomorphism classes of finite sets and one can look for operations on sets which, after quotienting, allow recovering traditional arithmetic operations. Moreover, from a
## References
SHOWING 1-8 OF 8 REFERENCES
The Cantor–Schröder–Bernstein Theorem for -groupoids
We show that the Cantor–Schröder–Bernstein Theorem for homotopy types, or ∞groupoids, holds in the following form: For any two types, if each one is embedded into the other, then they are equivalent.
THOUGHTS ON THE CANTOR-BERNSTEIN THEOREM
• Mathematics
• 1986
The usual proofs of the well-known set-theoretical theorem “Given one-one maps f: A → B and g:B → A, there exists a one-one onto map h:A → B” actually produce a map h:A → B contained in the relation
Infinite sets that Satisfy the Principle of Omniscience in any Variety of Constructive Mathematics
• M. Escardó
• Mathematics
The Journal of Symbolic Logic
• 2013
Abstract We show that there are plenty of infinite sets that satisfy the omniscience principle, in a minimalistic setting for constructive mathematics that is compatible with classical mathematics. A
Set Theory
• F. Stephan
• Computer Science
|
2022-08-19 21:38:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9319392442703247, "perplexity": 1434.2811023163902}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573760.75/warc/CC-MAIN-20220819191655-20220819221655-00384.warc.gz"}
|
http://coolprop.sourceforge.net/fluid_properties/fluids/Ethanol.html
|
# Ethanol¶
## References¶
### Equation of State¶
J. A. Schroeder, S. G. Penoncello, and J. S. Schroeder. A Fundamental Equation of State for Ethanol. J. Phys. Chem. Ref. Data, 43(4):043102, 2014. doi:10.1063/1.4895394.
### Thermal Conductivity¶
M. J. Assael, E. A. Sykioti, M. L. Huber, and R. A. Perkins. Reference Correlation of the Thermal Conductivity of Ethanol from the Triple Point to 600 K and up to 245 MPa. J. Phys. Chem. Ref. Data, 42(2):023102–1:10, 2013. doi:10.1063/1.4797368.
### Viscosity¶
S. B. Kiselev, J. F. Ely, I. M. Abdulagatov, and M. L. Huber. Generalized SAFT-DFT/DMT Model for the Thermodynamic, Interfacial, and Transport Properties of Associating Fluids: Application for n-Alkanols. Ind. Eng. Chem. Res., 44:6916–6927, 2005. doi:10.1021/ie050010e.
### Melting Line¶
T. F. Sun, J. A. Schouten, N. J. Trappeniers, and S. N. Biswas. Accurate Measurement of the Melting Line of Methanol and Ethanol at Pressures up to 270 MPa. Ber. Bunsenges. Phys. Chem., 92:652–655, 1988. doi:10.1002/bbpc.198800153.
### Surface Tension¶
A. Mulero, I. Cachadiña, and M. I. Parra. Recommended Correlations for the Surface Tension of Common Fluids. J. Phys. Chem. Ref. Data, 41(4):043105–1:13, 2012. doi:10.1063/1.4768782.
## Aliases¶
C2H6O, ethanol, ETHANOL
## Fluid Information¶
Parameter, Value
General
Molar mass [kg/mol] 0.04606844
CAS number 64-17-5
ASHRAE class UNKNOWN
Formula $$C_{2}H_{6}O$$
Acentric factor 0.644
InChI InChI=1/C2H6O/c1-2-3/h3H,2H2,1H3
InChIKey LFQSCWFLJHTTHZ-UHFFFAOYAB
SMILES CCO
ChemSpider ID 682
2D image
Limits
Maximum temperature [K] 650.0
Maximum pressure [Pa] 280000000.0
Triple point
Triple point temperature [K] 159.1
Triple point pressure [Pa] 0.000735392822517
Critical point
Critical point temperature [K] 514.71
Critical point density [kg/m3] 273.1858492
Critical point density [mol/m3] 5930.0
Critical point pressure [Pa] 6268000.0
## REFPROP Validation Data¶
Note
This figure compares the results generated from CoolProp and those generated from REFPROP. They are all results obtained in the form $$Y(T,\rho)$$, where $$Y$$ is the parameter of interest and which for all EOS is a direct evaluation of the EOS
You can download the script that generated the following figure here: (link to script), right-click the link and then save as... or the equivalent in your browser. You can also download this figure as a PDF.
## Consistency Plots¶
The following figure shows all the flash routines that are available for this fluid. A red + is a failure of the flash routine, a black dot is a success. Hopefully you will only see black dots. The red curve is the maximum temperature curve, and the blue curve is the melting line if one is available for the fluid.
In this figure, we start off with a state point given by T,P and then we calculate each of the other possible output pairs in turn, and then try to re-calculate T,P from the new input pair. If we don’t arrive back at the original T,P values, there is a problem in the flash routine in CoolProp. For more information on how these figures were generated, see CoolProp.Plots.ConsistencyPlots
Note
You can download the script that generated the following figure here: (link to script), right-click the link and then save as... or the equivalent in your browser. You can also download this figure as a PDF.
|
2018-01-22 00:16:26
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4798009693622589, "perplexity": 5035.041486870909}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084890928.82/warc/CC-MAIN-20180121234728-20180122014728-00090.warc.gz"}
|
https://stats.stackexchange.com/questions/254124/why-does-logistic-regression-become-unstable-when-classes-are-well-separated/254271
|
# Why does logistic regression become unstable when classes are well-separated?
Why is it that logistic regression becomes unstable when classes are well-separated? What does well-separated classes mean? I would really appreciate if someone can explain with an example.
It isn't correct that logistic regression in itself becomes unstable when there are separation. Separation means that there are some variables which are very good predictors, which is good, or, separation may be an artifact of too few observations/too many variables. If that is the case, the solution might be to get more data. But separation itself, then, is only a symptom, and not a problem in itself.
So there are really different cases to be treated. First, what is the goal of the analysis? If the final result of the analysis is some classification of cases, separation is no problem at all, it really means that there are very good variables giving very good classification. But if the goal is risk estimation, we need the parameter estimates, and with separation the usual mle (maximum likelihood) estimates do not exist. So we must change estimation method, maybe. There are several proposals in the literature, I will come back to that.
Then there are (as said above) two different possible causes for separation. There might be separation in the full population, or separation might be caused by to few observed cases/too many variables.
What breaks down with separation, is the maximum likelihood estimation procedure. The mle parameter estimates (or at least some of them) becomes infinite. I said in the first version of this answer that that can be solved easily, maybe with bootstrapping, but that does not work, since there will be separation in each bootstrap resample, at least with the usual cases bootstrapping procedure. But logistic regression is still a valid model, but we need some other estimation procedure. Some proposals have been:
1. regularization, like ridge or lasso, maybe combined with bootstrap.
2. exact conditional logistic regression
3. permutation tests, see https://www.ncbi.nlm.nih.gov/pubmed/15515134
4. Firths bias-reduced estimation procedure, see Logistic regression model does not converge
5. surely others ...
If you use R, there is a package on CRAN, SafeBinaryRegression, which help with diagnosing problems with separation, using mathematical optimization methods to check for sure if there is separation or quasiseparation! In the following I will be giving a simulated example using this package, and the elrm package for approximate conditional logistic regression.
First, a simple example with the safeBinaryRegression package. This package just redefines the glm function, overloading it with a test of separation, using linear programming methods. If it detects separation, it exits with an error condition, declaring that the mle does not exist. Otherwise it just runs the ordinary glm function from stats. The example is from its help pages:
library(safeBinaryRegression) # Some testing of that package,
# based on its examples
# complete separation:
x <- c(-2, -1, 1, 2)
y <- c(0, 0, 1, 1)
glm(y ~ x, family=binomial)
glm(y ~ x, family=binomial, separation="test")
stats::glm(y~ x, family=binomial)
# Quasicomplete separation:
x <- c(-2, 0, 0, 2)
y <- c(0, 0, 1, 1)
glm(y ~ x, family=binomial)
glm(y ~ x, family=binomial, separation="test")
stats::glm(y~ x, family=binomial)
The output from running it:
> # complete separation:
> x <- c(-2, -1, 1, 2)
> y <- c(0, 0, 1, 1)
> glm(y ~ x, family=binomial)
Error in glm(y ~ x, family = binomial) :
The following terms are causing separation among the sample points: (Intercept), x
> glm(y ~ x, family=binomial, separation="test")
Error in glm(y ~ x, family = binomial, separation = "test") :
Separation exists among the sample points.
This model cannot be fit by maximum likelihood.
> stats::glm(y~ x, family=binomial)
Call: stats::glm(formula = y ~ x, family = binomial)
Coefficients:
(Intercept) x
-9.031e-08 2.314e+01
Degrees of Freedom: 3 Total (i.e. Null); 2 Residual
Null Deviance: 5.545
Residual Deviance: 3.567e-10 AIC: 4
Warning message:
glm.fit: fitted probabilities numerically 0 or 1 occurred
> # Quasicomplete separation:
> x <- c(-2, 0, 0, 2)
> y <- c(0, 0, 1, 1)
> glm(y ~ x, family=binomial)
Error in glm(y ~ x, family = binomial) :
The following terms are causing separation among the sample points: x
> glm(y ~ x, family=binomial, separation="test")
Error in glm(y ~ x, family = binomial, separation = "test") :
Separation exists among the sample points.
This model cannot be fit by maximum likelihood.
> stats::glm(y~ x, family=binomial)
Call: stats::glm(formula = y ~ x, family = binomial)
Coefficients:
(Intercept) x
5.009e-17 9.783e+00
Degrees of Freedom: 3 Total (i.e. Null); 2 Residual
Null Deviance: 5.545
Residual Deviance: 2.773 AIC: 6.773
Now we simulate from a model which can be closely approximated by a logistic model, except that above a certain cutoff the event probability is exactly 1.0. Think about a bioassay problem, but above the cutoff the poison always kills:
pl <- function(a, b, x) 1/(1+exp(-a-b*x))
a <- 0
b <- 1.5
x_cutoff <- uniroot(function(x) pl(0,1.5,x)-0.98,lower=1,upper=3.5)$root ### circa 2.6 pltrue <- function(a, b, x) ifelse(x < x_cutoff, pl(a, b, x), 1.0) x <- -3:3 ### Let us simulate many times from this model, and try to estimate it ### with safeBinaryRegression::glm That way we can estimate the probability ### of separation from this model set.seed(31415926) ### May I have a large container of coffee replications <- 1000 p <- pltrue(a, b, x) err <- 0 good <- 0 for (i in 1:replications) { y <- rbinom(length(x), 1, p) res <- try(glm(y~x, family=binomial), silent=TRUE) if (inherits(res,"try-error")) err <- err+1 else good <- good+1 } P_separation <- err/replications P_separation When running this code, we estimate the probability of separation as 0.759. Run the code yourself, it is fast! Then we extend this code to try different estimations procedures, mle and approximate conditional logistic regression from elrm. Running this simulation take around 40 minutes on my computer. library(elrm) # from CRAN set.seed(31415926) ### May I have a large container of coffee replications <- 1000 GOOD <- numeric(length=replications) ### will be set to one when MLE exists! COEFS <- matrix(as.numeric(NA), replications, 2) COEFS.elrm <- matrix(as.numeric(NA), replications, 2) # But we'll only use second col for x p <- pltrue(a, b, x) err <- 0 good <- 0 for (i in 1:replications) { y <- rbinom(length(x), 1, p) res <- try(glm(y~x, family=binomial), silent=TRUE) if (inherits(res,"try-error")) err <- err+1 else{ good <- good+1 GOOD[i] <- 1 } # Using stats::glm mod <- stats::glm(y~x, family=binomial) COEFS[i, ] <- coef(mod) # Using elrm: DATASET <- data.frame(x=x, y=y, n=1) mod.elrm <- elrm(y/n ~ x, interest= ~ x -1, r=4, iter=10000, burnIn=1000, dataset=DATASET) COEFS.elrm[i, 2 ] <- mod.erlm$coeffs
}
### Now we can compare coefficient estimates of x,
### when there are separation, and when not:
non <- which(GOOD==1)
cof.mle.non <- COEFS[non, 2, drop=TRUE]
cof.mle.sep <- COEFS[-non, 2, drop=TRUE]
cof.elrm.non <- COEFS.elrm[non, 2, drop=TRUE]
cof.elrm.sep <- COEFS.elrm[-non, 2, drop=TRUE]
Now we want to plot the results, but before that, note that ALL the conditional estimates are equal! That is really strange and should need an explanation ... The common value is 0.9523975. But at least we obtained finite estimates, with confidence intervals which contains the true value (not shown here). So I will only show a histogram of the mle estimates in the cases without separation:
hist(cof.mle.non, prob=TRUE)
[
What is remarkable is that all the estimates is lesser than the true value 1.5. That can have to do with the fact that we simulated from a modified model, needs investigation.
• – kjetil b halvorsen Jan 4 '17 at 16:42
• (+1) But it's rather strong to say we need an estimation procedure other than maximum likelihood. Infinite odds & odds ratios can be sensible point estimates; often enough the problem caused by separation is just getting good interval estimates. – Scortchi May 30 '17 at 10:54
• @kjetilbhalvorsen Apologies to revive an old thread, but I was wondering if you are aware of a similar package in python? – Meep Jul 23 at 20:48
• Sorry, but I do not know about python. But it should be possible to run R from within python. – kjetil b halvorsen Jul 24 at 21:52
There are good answers here from @sean501 and @kjetilbhalvorsen. You asked for an example. Consider the figure below. You might come across some situation in which the data generating process is like that depicted in panel A. If so, it is quite possible that the data you actually gather look like those in panel B. Now, when you use data to build a statistical model, the idea is to recover the true data generating process or at least come up with an approximation that is reasonably close. Thus, the question is, will fitting a logistic regression to the data in B yield a model that approximates the blue line in A? If you look at panel C, you can see that the gray line better approximates the data than the true function does, so in seeking the best fit, the logistic regression will 'prefer' to return the gray line rather than the blue one. It doesn't stop there, however. Looking at panel D, the black line approximates the data better than the gray one—in fact, it is the best fit that could possibly occur. So that is the line the logistic regression model is pursuing. It corresponds to an intercept of negative infinity and a slope of infinity. That is, of course, very far from the truth that you are hoping to recover. Complete separation can also cause problems with the calculation of the p-values for your variables that come standard with logistic regression output (the explanation there is slightly different and more complicated). Moreover, trying to combine the fit here with other attempts, for example with a meta-analysis, will just make the other findings less accurate.
• (+1) This is a very helpful illustration of the problem. – mkt Jul 26 '17 at 15:46
• one interesting aspect your diagram shows is that you ideally want the sample to come from the "x space" that leads to 50-50 probabilities (e.g. points in the range 12<x<15). in fact i think you would probably want to collect more data from this middle region (10<x<17) in a real life scenario that provided this result. – probabilityislogic Jun 30 at 12:12
• @probabilityislogic, that's right. Most of the information about the relationship is in the data from the middle region. – gung Jun 30 at 13:08
It means that there is a hyperplane such that on one side there are all the positive points and on the other side all the negative. The maximum likelihood solution is then flat 1 on one side and flat 0 on other side, which is 'achieved' with the logistic function by having the coefficients at infinity.
What you are calling "separation" (not 'seperation') covers two different situations that end up causing the same issue – which I would not call, however, an issue of "instability" as you do.
## An illustration: Survival on the Titanic
• Let $DV \in (0, 1)$ be a binary dependent variable, and $SV$ a separating, independent variable.
Let's suppose that $SV$ is the class of the passengers on the Titanic, and that $DV$ indicates whether they survived the wreckage, with $0$ indicating death and $1$ indicating survival.
• Complete separation is the situation where $SV$ predicts all values of $DV$.
That would be the case if all first-class passengers on the Titanic had survived the wreckage, and none of the second-class passengers had survived.
• Quasi-complete separation is the situation where $SV$ predicts either all cases where $DV = 0$, or all cases where $DV = 1$, but not both.
That would be the case if some first-class passengers on the Titanic had survived the wreckage, and none of the second-class passengers had survived. In that case, passenger class $SV$ predicts all cases where $DV = 1$, but not all cases where $DV = 0$.
Reversely, if only some second-class passengers on the Titanic had died in the wreckage, then passenger class $SV$ predicts all cases where $DV = 0$, but not all cases where $DV = 1$, which includes both first-class and second-class passengers.
What you are calling "well-separated classes" is the situation where a binary outcome variable $DV$ (e.g. survival on the Titanic) can be completely or quasi-completely mapped to a predictor $SV$ (e.g. passenger class membership; $SV$ need not be binary as it is in my example).
## Why is logistic regression "unstable" in these cases?
This is well explained in Rainey 2016 and Zorn 2005.
• Under complete separation, your logistic model is going to look for a logistic curve that assigns, for example, all probabilities of $DV$ to $1$ when $SV = 1$, and all probabilities to $DV$ to $0$ when $SV = 0$.
This corresponds to the aforementioned situation where only and all first-class passengers of the Titanic survive, with $SV = 1$ indicating first-class passenger membership.
This is problematic because the logistic curve lies strictly between $0$ and $1$, which means that, to model the observed data, the maximisation is going to push some of its terms towards infinity, in order, if you like, to make $SV$ "infinitely" predictive of $DV$.
• The same problem arises under quasi-complete separation, as the logistic curve will still need to assign only values of either $0$ or $1$ to $DV$ in one of two cases, $SV = 0$ or $SV = 1$.
In both cases, the likelihood function of your model will be unable to find a maximum likelihood estimate: it will only find an approximation of that value by approaching it asymptotically.
What you are calling "instability" is the fact that, in cases of complete or quasi-complete separation, there is no finite likelihood for the logistic model to reach. I would not use that term, however: the likelihood function is, in fact, being pretty "stable" (monotonic) in its assignment of coefficient values towards infinity.
Note: my example is fictional. Survival on the Titanic did not boil down just to passenger class membership. See Hall (1986).
|
2019-09-19 15:18:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7866137623786926, "perplexity": 1244.6254823750676}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573533.49/warc/CC-MAIN-20190919142838-20190919164838-00379.warc.gz"}
|
https://uname.pingveno.net/blog/index.php/post/2015/08/29/Postfix-%3A-configure-postmaster%2C-hostmaster%2C-and-abuse-catchall-for-RFC-compliance
|
This short howto will show you how to set up a catchall for common required email addresses. Some mail servers are testing if mail is accepted on this addresses to detect spammymail servers. Hostmaster address can also be used for domain Trading, to check the ownership of the domain.
1. Create a file named /etc/postfix/regexp-catchall.cf with the following content:
# Catchall to comply with RFC standards
/^abuse@/ youshouldreadit@mydomain.com
virtual_alias_maps = proxy:mysql:/etc/postfix/mysql-virtual_forwardings.cf, regexp:/etc/postfix/regexp-catchall.cf
|
2018-01-19 11:15:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17087285220623016, "perplexity": 7079.337867114672}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084887973.50/warc/CC-MAIN-20180119105358-20180119125358-00341.warc.gz"}
|
http://www.kohout-a-slepice.cz/lcl3xfar/sequence-and-series-examples-fdc751
|
Definition of Series The addition of the terms of a sequence (a n), is known as series. More precisely, an infinite sequence (,,, â¦) defines a series S that is denoted = + + + ⯠= â = â. Here are a few examples of sequences. The sequence on the given example can be written as 1, 4, 9, 16, ⦠⦠â¦, ð2, ⦠⦠Each number in the range of a sequence is a term of the sequence, with ð ð the nth term or general term of the sequence. Scroll down the page for examples and solutions on how to use the formulas. A series has the following form. ⦠If we have a sequence 1, 4, 7, 10, ⦠Then the series of this sequence is 1 + 4 + 7 + 10 +⦠The Greek symbol sigma âΣâ is used for the series which means âsum upâ. A Sequence is a set of things (usually numbers) that are in order.. Each number in the sequence is called a term (or sometimes "element" or "member"), read Sequences and Series for more details.. Arithmetic Sequence. geometric series. If the sequence of partial sums is a convergent sequence (i.e. If we have a sequence 1, 4, 7, 10, ⦠Then the series of this sequence is 1 + 4 + 7 + 10 +⦠Notation of Series. Now, just as easily as it is to find an arithmetic sequence/series in real life, you can find a geometric sequence/series. In 2013, the number of students in a small school is 284. 5. Thus, the first term corresponds to n = 1, the second to n = 2, and so on. Example 1.1.1 Emily ï¬ips a quarter ï¬ve times, the sequence of coin tosses is HTTHT where H stands for âheadsâ and T stands for âtailsâ. Let's say this continues for the next 31 days. The common feature of these sequences is that the terms of each sequence âaccumulateâ at only one point. It is estimated that the student population will increase by 4% each year. He knew that the emperor loved chess. When the craftsman presented his chessboard at court, the emperor was so impressed by the chessboard, that he said to the craftsman "Name your reward" The craftsman responded "Your Highness, I don't want money for this. In an Arithmetic Sequence the difference between one term and the next is a constant.. [Image will be uploaded soon] You may have heard the term in⦠Arithmetic Series We can use what we know of arithmetic sequences to understand arithmetic series. The distinction between a progression and a series is that a progression is a sequence, whereas a series is a sum. The arithmetical and geometric sequences that follow a certain rule, triangular number sequences built on a pattern, the famous Fibonacci sequence based on recursive formula, sequences of square or cube numbers etc. We can use this back in our formula for the arithmetic series. Letâs start with one ancient story. An infinite arithmetic series is the sum of an infinite (never ending) sequence of numbers with a common difference. This will allow you to retell the story in the order in which it occurred. The formula for the nth term generates the terms of a sequence by repeated substitution of counting numbers for ð. In mathematics, a series is the sum of the terms of an infinite sequence of numbers. An arithmetic series also has a series of common differences, for example 1 + 2 + 3. Generally, it is written as S n. Example. Continuing on, everyday he gets what is in his bank account. Its as simple as thinking of a family reproducing and keeping the family name around. Series are similar to sequences, except they add terms instead of listing them as separate elements. The fast-solving method is the most important feature Sequence and Series Class 11 NCERT Solutions comprise of. Read on to examine sequence of events examples! Fibonacci Sequence Formula. The Meg Ryan series is a speci c example of a geometric series. arithmetic series word problems with answers Question 1 : A man repays a loan of 65,000 by paying 400 in the first month and then increasing the payment by 300 every month. 16+12+8 +4+1 = 41 16 + 12 + 8 + 4 + 1 = 41 yields the same sum. In Generalwe write a Geometric Sequence like this: {a, ar, ar2, ar3, ... } where: 1. ais the first term, and 2. r is the factor between the terms (called the "common ratio") But be careful, rshould not be 0: 1. Basic properties. For instance, " 1, 2, 3, 4 " is a sequence, with terms " 1 ", " 2 ", " 3 ", and " 4 "; the corresponding series is the sum " 1 + 2 + 3 + 4 ", and the value of the series is 10 . Letâs look at some examples of sequences. For example, the sequence {2, 5, 8, 11} is an arithmetic sequence, because each term can be found by adding three to the term before it. For example, the next day he will receive $0.01 which leaves a total of$0.02 in his account. So now we have So we now know that there are 136 seats on the 30th row. n = number of terms. Definition and Basic Examples of Arithmetic Sequence. So he conspires a plan to trick the emperor to give him a large amount of fortune. Sequences and Series are basically just numbers or expressions in a row that make up some sort of a pattern; for example,January,February,March,â¦,December is a sequence that represents the months of a year. An arithmetic sequence is one in which there is a common difference between consecutive terms. Sequences and Series â Project 1) Real Life Series (Introduction): Example 1 - Jonathan deposits one penny in his bank account. The following diagrams give two formulas to find the Arithmetic Series. D. DeTurck Math 104 002 2018A: Sequence and series 14/54 In a Geometric Sequence each term is found by multiplying the previous term by a constant. Infinite Sequences and Series This section is intended for all students who study calculus and considers about $$70$$ typical problems on infinite sequences and series, fully solved step-by-step. Introduction to Series . Solution: Remember that we are assuming the index n starts at 1. Though the elements of the sequence (â 1) n n \frac{(-1)^n}{n} n (â 1) n oscillate, they âeventually approachâ the single point 0. The terms are then . You would get a sequence that looks something like - 1, 2, 4, 8, 16 and so on. If you're seeing this message, it means we're ⦠The n th partial sum S n is the sum of the first n terms of the sequence; that is, = â =. Each page includes appropriate definitions and formulas followed by ⦠Example 7: Solving Application Problems with Geometric Sequences. Sequence and Series Class 11 NCERT solutions are presented in a concise structure so that students get the relevance once they are done with each section. Then the following formula can be used for arithmetic sequences in general: Geometric series are used throughout mathematics, and they have important applications in physics, engineering, biology, economics, computer science, queueing theory, and finance. Here, the sequence is defined using two different parts, such as kick-off and recursive relation. Can you find their patterns and calculate the next ⦠As a side remark, we might notice that there are 25= 32 diï¬erent possible sequences of ï¬ve coin tosses. Arithmetic Sequences and Sums Sequence. Practice Problem: Write the first five terms in the sequence . Series like the harmonic series, alternating series, Fourier series etc. When r=0, we get the sequence {a,0,0,...} which is not geometric Generally it is written as S n. Example. Geometric number series is generalized in the formula: x n = x 1 × r n-1. Meaning of Series. Identifying the sequence of events in a story means you can pinpoint its beginning, its middle, and its end. its limit exists and is finite) then the series is also called convergent and in this case if lim nââsn = s lim n â â s n = s then, â â i=1ai = s â i = 1 â a i = s. The individual elements in a sequence are called terms. The Fibonacci sequence of numbers âF n â is defined using the recursive relation with the seed values F 0 =0 and F 1 =1:. Term corresponds to n = 2, and so on as with his mind of $0.02 his... Arithmetic sequences to understand arithmetic series is the most important feature sequence and series of common differences for... Of series the addition of the sequence is called series con man who chessboards! A verbal description of a sequence, whereas a series is that the student will. 1 2 is, the second to n = 1, 2, 4, 8, and! Of counting numbers for ð between one term and the next day will! Summation that sums the terms of each sequence âaccumulateâ at only one point add terms of. Gets what is in his account estimated that the terms of a number models relationship! Particular, sequences are the basis for series, Fourier series etc gets what is in his account. The Greek symbol âΣâ for the nth term generates the terms of an infinite arithmetic series also has a is. Thus, the next day he will receive$ 0.01 which leaves a total of $0.02 in bank... A particular pattern a sequence ( a n ), is known as series, everyday gets. The sigma notation that is, the first term corresponds to n 2. Between consecutive terms a speci c example of a number receive$ 0.01 which a! Geometric sequence/series 16 + 12 + 8 + 4 + 1 = the first term x... Of students in a sequence ( i.e series never ends: 1 2! With a common difference as it is estimated that the terms of a sequence can be thought of a... = 41 16 + 12 + 8 + 12 + 8 + +... N starts at 1 16 and so on to 0 definition of series the addition of the sequence 1... Numbers ( or other objects ) that usually follow sequence and series examples particular order gets what in! Individual elements in a small school is 284 is a series is the sum of the terms of family. +12+16 = 41 is one in which there is a convergent sequence ( a n ), known..., a series of common differences, for example, the sequence is called.... What we know of arithmetic sequences to understand arithmetic series numbers of the sequence is a.. Students in a geometric sequence each sequence and series examples is found by multiplying the previous term by constant! Of 1 2 and three tails is estimated that the series which means âsum upâ sequence and series Class NCERT... Solutions of Chapter 9 sequences and series of common differences, for example 1 + 4 + 8 + +. The student population will increase by 4 % each year the term in⦠geometric series and keeping family... Common feature of these sequences is that the student population will increase 4. The most important feature sequence and series of common differences, for example 1 + 2 + 3.... Great importance in the order in which it occurred series is that a is! Objects ) that usually follow a particular order parts, such as kick-off and recursive.! Infinite ( never ending ) sequence of numbers ( or other objects ) that usually follow particular. Term, x 1 = the first term corresponds to n = 2, 4, sequence and series examples, 16 so... Of numbers with a common difference between consecutive terms 11 NCERT book available free sequence formula of Chapter 9 and! Can be thought of as a side remark, we might notice that there are 25= 32 possible... 1+4+8 +12+16 = 41 16 + 12 + 16 = 41 16 + 12 + sequence and series examples + +! A con man who made chessboards for the series which means âsum upâ geometric series sequences is that progression! Mathematical sequences and series Class 11 NCERT solutions comprise of on how to use the sigma notation is. The index n starts at 1 name around large amount of fortune page includes appropriate definitions and formulas followed â¦. 11 NCERT book available free Fibonacci sequence formula con man who made chessboards for the day. Basis for series, Fourier series etc reproducing and keeping the family name around series that arise out various!, x 1 = the first term, r =common ratio, and series of Class NCERT. 2 + 3 ⦠1 + 2 + 3 ⦠comprise of other objects ) usually. Sigma notation that is, the first term, x 1 = the first five in. Constant times ) the successive powers of 1 2 ( i.e 1, 2, 4,,! 30Th row of numbers ( or other objects ) that usually follow a order! With a particular pattern can find a geometric sequence/series sum of the terms each... Corresponds to n = 1, the sequence that looks something like -,! Might notice that there are 25= 32 diï¬erent possible sequences of ï¬ve tosses! Series and you would get a sequence can be thought of as a of... Increase by 4 % each year series Class 11 NCERT book available free terms instead of listing them separate! Definition of series the addition of the sequence and series examples something like - 1 the... The sum of the sequence is called series sequences to understand arithmetic series also a! Summation of all the numbers of the terms of an infinite sequence of numbers a!, 10 have two heads and three tails that a progression and a series is a convergent sequence a. So he sequence and series examples a plan to trick the emperor to give him a amount! Is one series and of various formulas the next 31 days infinite sequence of with... Will increase by 4 % each year by ⦠Fibonacci sequence formula thought of as a remark..., determine the sequence that looks something like - 1, 2, and basis! The 30th row term, x 1 = the first term, x 1 41. ÂSum upâ ( possibly a constant a sequence can be thought of as a side remark, we might that. Day he will receive $0.01 which leaves a total of$ 0.02 in his bank account 8, and... In his bank account simple as thinking of a family reproducing and keeping the name. To use the sigma notation that is, the next day he will receive $0.01 leaves! Different parts, such as kick-off and recursive relation formulas sequence and series examples find an arithmetic sequence/series in real life, can. 1+4+8 +12+16 = 41 16 + 12 + 8 + 4 + 8 + 4 + 1 = the term! A geometric sequence each term is found by multiplying the previous term by a constant a sequence that looks like! Just as easily as it is written as S n. example sums is a chain of numbers with a order! Would get a sequence can be thought of as a side remark, we notice! He conspires a plan to trick the emperor to give him a large amount of fortune is! 16 + 12 + 16 = 41 is one series and use this back our. Find the arithmetic series n ), is known as series seats on the row... The sequence each sequence âaccumulateâ at only one point work as well as with his mind differential and! And recursive relation by repeated substitution of counting numbers for ð successive powers of a sequence is one which! He will receive$ 0.01 which leaves a total of \$ 0.02 in his.. That is, the Greek symbol âΣâ for the nth term generates the terms of arithmetic... Instead of listing them as separate elements: Write the first term corresponds to n = n term. Is one series and 1 + 4 + 1 = the first five terms in the order in which is. ÂΣ for the series which means âsum upâ population will increase by 4 % each year of numbers ( other! The formula for the next 31 days most important feature sequence and series that arise out various. Multiplying the previous term by a constant and solutions on how to use sequence and series examples sigma that. Of these numbers or expressions are called terms or elementsof the sequence students in a school... Which are important in differential equations and analysis where ; x n = n term... Good at his work as well as with his mind most important feature sequence and series Class 11 NCERT comprise. This back in our formula for the series which means âsum upâ NCERT book available.. Sequence are called terms or elementsof the sequence is defined using two different parts, as! Series like the harmonic series, which are important in differential equations and analysis counting. Sequence, whereas a series or summation that sums the terms of each sequence at... This continues for the emperor 's say this continues for the nth term generates the terms of an arithmetic... Infinite sequence of partial sums is a speci c example of a real-world relationship, the... Meg Ryan series has terms that are ( possibly a constant reproducing and keeping the name... Sequence each term is found by multiplying the previous term by a constant 1 + 4 8! Parts, such as kick-off and recursive relation as series generally, it is written as S n. example that... Two heads and three tails as with his mind have so we now know that are... As S n. example parts, such as kick-off and recursive relation give formulas..., you can find a geometric sequence each term is found by multiplying the term... Would get a sequence are called terms nth term generates the terms of a geometric series has terms that (! 41 is one in which it occurred are ( possibly a constant times ) the successive of! Sequence, whereas a series or summation that sums the terms of a family reproducing and the...
|
2021-03-04 09:13:42
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.806519091129303, "perplexity": 587.8032948175927}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178368687.25/warc/CC-MAIN-20210304082345-20210304112345-00438.warc.gz"}
|
https://diego.assencio.com/
|
# Recent posts
## The nature of the "this" pointer in C++
Posted by Diego Assencio on 2017.04.01 under Programming (C/C++)
Whenever you call a non-static member function of a class, you call it through an existing object of that class type. Inside the definition of such a member function, you can refer to this object through the this pointer. Unless there is a need to disambiguate the use of a certain variable name (for instance, if a class data member has the same name as a local variable of the member function), the this pointer is often not used by developers to explicitly refer to class data members. This is almost always not a problem, but as I will discuss in this post, there are situations which require special care in order to avoid certain pitfalls.
To start, consider the following piece of code:
class AddNumber
{
public:
...
private:
int number;
};
{
return number + other;
}
When the compiler parses the code above, it will understand that on the definition of AddNumber::add, number refers to the class data member with that name, i.e., that the code above is equivalent to this:
class AddNumber
{
public:
...
private:
int number;
};
{
return this->number + other;
}
However, if we change the name of the parameter other of AddNumber::add to number, the compiler will interpret any occurrence of number inside its definition as the function parameter number instead of the data member this->number:
class AddNumber
{
public:
...
private:
int number;
};
{
return number + number; /* here number is not this->number! */
}
To fix this ambiguity, we can use the this pointer to indicate to the compiler that the first occurrence of number actually refers to the class data member instead of the function parameter:
class AddNumber
{
public:
...
private:
int number;
};
{
return this->number + number; /* this is what we originally had */
}
I hope there was nothing new for you on everything discussed so far, so let's move on to more interesting things.
One could argue that classes as we see them don't really exist: they are purely syntactic sugar for avoiding having to explicitly pass object pointers around as we do in C programs. To clarify this idea, take a look at the code below: it is conceptually equivalent to the one above except for the absence of the private access specifier. To prevent any desperation in advance, the code below is not valid C++; its purpose is merely to illustrate the concepts we are about to discuss:
struct AddNumber
{
...
int number;
};
{
return this->number + number;
}
Why is the code above not valid? Well, for two reasons: AddNumber::add is not a valid function name in this context (it is not a member of AddNumber), and this, being a reserved keyword, cannot be used as a parameter name. While in the original version, AddNumber:add is called through an existing object of type AddNumber:
AddNumber my_adder;
...
in our (invalid) non-class version, AddNumber:add is called with an object as argument:
AddNumber my_adder;
...
Were it not invalid, the non-class version would do exactly the same as the original one. But in any case, it better represents how the compiler actually interprets things. Indeed, it makes it obvious that if we remove the this-> prefix from the first occurrence of number, we will end up with the problem discussed earlier: number will be interpreted exclusively as the function parameter. But don't take my word for it, see it for yourself:
struct AddNumber
{
...
int number;
};
{
return number + number; /* this pointer not used, return 2*number */
}
This brings us to the first lesson of this post: whenever you see a non-static member function, try to always read it as a stand-alone (i.e., non-member) function containing a parameter called this which is a pointer to the object the function is doing its work for.
One question which must be asked at this point is: what about static member functions? Do they also implicitly contain a this pointer? The answer is no, they don't. If they did, they would inevitably be associated with some existing object of the class, but static member functions, like static data members, belong to the class itself and can be invoked directly, i.e., without the need for an an existing class object. In this regard, a static member function is in no way special: the compiler will neither implicitly add a this parameter to its declaration nor introduce this-> prefixes anywhere on its definition.
Static member functions have, however, access to the internals of a class like any other member or friend function, provided it is given a pointer to a class object. This means the following code is valid:
class AddNumber
{
public:
...
private:
int number;
};
{
}
There is one type of situation in which the implicit presence of the this pointer on non-static member functions can cause a lot of headache to the innocent developer. Here it is, in its full "glory":
/* a global array of callable warning objects */
std::vector<std::function<void()>> warnings;
class WarningManager
{
public:
...
private:
std::string name;
};
{
warnings.emplace_back([=]() {
std::cout << name << ": " << message << "\n";
});
}
The purpose of the code above is simple: WarningManager::add_warning populates the global array warnings with lambda functions which print some warning message when invoked. Regardless of how silly the purpose of this code may seem, scenarios like these do happen in practice. And being so, do you see what is the problem here?
If the problem is unclear to you, consider the advice given earlier: read the member function WarningManager::add_warning as a non-member function which takes a pointer called this to a WarningManager object:
/* a global array of callable warning objects */
std::vector<std::function<void()>> warnings;
struct WarningManager
{
...
std::string name;
};
const std::string& message)
{
warnings.emplace_back([=]() {
std::cout << this->name << ": " << message << "\n";
});
}
You may be puzzled with the fact that name on the original version of the code was replaced by this->name on the (remember, invalid) second version. Perhaps you are asking yourself: "isn't name itself actually copied by the capture list on the lambda function"? The answer is no. A "capture all by value" capture list (i.e., [=]) captures all non-static local variables which are visible in the scope where the lambda is created and nothing else. Function parameters fall into this category, but class data members don't. Therefore, the code above is conceptually identical to the following one:
/* a global array of callable warning objects */
std::vector<std::function<void()>> warnings;
struct WarningManager
{
...
std::string name;
};
const std::string& message)
{
warnings.emplace_back([this, message]() {
std::cout << this->name << ": " << message << "\n";
});
}
The problem is now easier to spot: in the original example, the name data member is not being captured directly by value, but is instead accessed through a copy of the this pointer to the WarningManager object for which WarningManager::add_warning is called. Since the lambda may be invoked at a point at which that object may no longer exist, the code above is a recipe for disaster. The lifetime of the lambda is independent from the lifetime of the WarningManager object which creates it, and the implicit replacement of name by this->name on the definition of the lambda means we can find ourselves debugging an obscure program crash.
A simple way to fix the problem just discussed is by being explicit about what we want: we want to capture name by value, so let's go ahead and make that very clear to everyone:
/* a global array of callable warning objects */
std::vector<std::function<void()>> warnings;
class WarningManager
{
public:
...
private:
std::string name;
};
{
const std::string& manager_name = this->name;
warnings.emplace_back([manager_name, message]() {
std::cout << manager_name << ": " << message << "\n";
});
}
Inside the capture list, the string this->name will be copied through its reference manager_name, and the lambda will therefore own a copy of this->name under the name manager_name. In C++14, this code can be simplified using the init capture capability which was added to lambda functions:
/* a global array of callable warning objects */
std::vector<std::function<void()>> warnings;
class WarningManager
{
public:
...
private:
std::string name;
};
{
warnings.emplace_back([manager_name = this->name, message]() {
std::cout << manager_name << ": " << message << "\n";
});
}
In this case, we are explicitly coping this->name into a string called manager_name which is then accessible inside the lambda function. As discussed in a previous post, lambda functions are equivalent to functor classes, and in this case, manager_name is a data member of such a class which is initialized as a copy of this->name.
To close this post, I strongly recommend you read the Zen of Python. Look at the second guiding principle: "Explicit is better than implicit". After reading this post, I hope you can better appreciate what a wise statement that is! :-)
## How are virtual function table pointers initialized?
Posted by Diego Assencio on 2017.03.07 under Programming (C/C++)
A class declaring or inheriting at least one virtual function contains a virtual function table (or vtable, for short). Such a class is said to be a polymorphic class. An object of a polymorphic class type contains a special data member (a "vtable pointer") which points to the vtable of this class. This pointer is an implementation detail and cannot be accessed directly by the programmer (at least not without resorting to some low-level trick). In this post, I will assume the reader is familiar with vtables on at least a basic level (for the uninitiated, here is a good place to learn about this topic).
I hope you learned that when you wish to make use of polymorphism, you need to access objects of derived types through pointers or references to a base type. For example, consider the code below:
#include <iostream>
struct Fruit
{
virtual const char* name() const
{
return "Fruit";
}
};
struct Apple: public Fruit
{
virtual const char* name() const override
{
return "Apple";
}
};
struct Banana: public Fruit
{
virtual const char* name() const override
{
return "Banana";
}
};
void analyze_fruit(const Fruit& f)
{
std::cout << f.name() << "\n";
}
int main()
{
Apple a;
Banana b;
analyze_fruit(a); /* prints "Apple" */
analyze_fruit(b); /* prints "Banana" */
return 0;
}
So far, no surprises here. But what will happen if instead of taking a reference to a Fruit object on analyze_fruit, we take a Fruit object by value?
Any experienced C++ developer will immediately see the word "slicing" written in front of their eyes. Indeed, taking a Fruit object by value means that inside analyze_fruit, the object f is truly a Fruit, and never an Apple, a Banana or any other derived type:
/* same code as before... */
void analyze_fruit(Fruit f)
{
std::cout << f.name() << "\n";
}
int main()
{
Apple a;
Banana b;
analyze_fruit(a); /* prints "Fruit" */
analyze_fruit(b); /* prints "Fruit" */
return 0;
}
This situation is worth analyzing in further detail, even if it seems trivial at first. On the calls to analyze_fruit, we pass objects of type Apple and Banana as arguments which are used to initialize its parameter f (of type Fruit). This is a copy initialization, i.e., the initialization of f in both of these cases is no different from the way f is initialized on the code fragment below:
Apple a;
Fruit f(a);
Even though Fruit does not define a copy constructor, one is provided by the compiler. This default copy constructor merely copies each data member of the source Fruit object into the corresponding data member of the Fruit object being created. In our case, Fruit has no data members, but it still has a vtable pointer. How is this pointer initialized? Is it copied directly from the input Fruit object? Before we answer these questions, let us look at what the compiler-generated copy constructor of Fruit looks like:
struct Fruit
{
/* compiler-generated copy constructor */
Fruit(const Fruit& sf): vptr(/* what goes in here? */)
{
/* nothing happens here */
}
virtual const char* name() const
{
return "Fruit";
}
};
The signature of the Fruit copy constructor shows that is takes a reference to a source Fruit object, which means if we pass an Apple object to the copy constructor of Fruit, the vtable pointer of sf (for "source fruit"), will really point to the vtable of an Apple object. In other words, if this vtable pointer is directly copied into the vtable pointer of the Fruit object being constructed (represented under the name vptr on the code above), this object will behave like an Apple whenever any of its virtual functions are called!
But as we mentioned on the second code example above (the one in which analyze_fruit takes a Fruit object by value), the Fruit parameter f always behaves as a Fruit, and never as an Apple or as a Banana.
This brings us to the main lesson of this post: vtable pointers are not common data members which are directly copied or moved by copy and move constructors respectively. Instead, they are always initialized by any constructor used to build an object of a polymorphic class type T with the address of the vtable for the T class. Also, assignment operators will never touch the values stored by vtable pointers. In the context of our classes, the vtable pointer of a Fruit object will be initialized by any constructor of Fruit with the address of the vtable for the Fruit class and will retain this value throughout the entire lifetime of the object.
## Solving Sudoku puzzles using Linear Programming
Posted by Diego Assencio on 2017.02.28 under Computer Science (Linear Programming)
In this post, I will show how solving a Sudoku puzzle is equivalent to solving an Integer Linear Programming (ILP) problem. This equivalence allows us to solve a Sudoku puzzle using any of the many freely available ILP solvers; an implementation of a solver (in Python 3) which follows the formulation described in this post can be found found here.
A Sudoku puzzle is an $N \times N$ grid divided in blocks of size $m \times n$, i.e., each block contains $m$ rows and $n$ columns, with $N = mn$ since the number of cells in a block is the same as the number of rows/columns on the puzzle. The most commonly known version of Sudoku is a $9 \times 9$ grid (i.e., $N = 9$) with $3 \times 3$ blocks (i.e., $m = n = 3$). Initially, a cell can be either empty or contain an integer value in the interval $[1,N]$; non-empty cells are fixed and cannot be modified as the puzzle is solved. The rules for solving the puzzle are:
1 each integer value $k \in [1,N]$ must appear exactly once in each row 2 each integer value $k \in [1,N]$ must appear exactly once in each column 3 each integer value $k \in [1,N]$ must appear exactly once in each block.
Each rule above individually implies that every cell of the puzzle will have a number assigned to it when the puzzle is solved, and that each row/column/block of a solved the puzzle will represent some permutation of the sequence $\{1, 2, \ldots, N\}$.
The version of Sudoku outlined by these rules is the standard one. There are many variants of Sudoku, but I hope that after reading this post, you will realize that the these variants can also be expressed as ILP problems using the same ideas presented here.
The rules above can be directly expressed as constraints of an ILP problem. Our formulation will be such that the constraints will enforce everything needed to determine a valid solution to the puzzle, and the objective function will therefore be irrelevant since any point which satisfies the constraints will represent a solution to the problem (notice, however, that some Sudoku puzzles may contain more than one solution, but I am assuming the reader will be content with finding a single one of those). Therefore, our objective function will be simply ${\bf 0}^T{\bf x} = 0$, where ${\bf 0}$ is a vector with all elements set to $0$ and ${\bf x}$ is a vector representing all variables used in the ILP formulation below.
The puzzle grid can be represented as an $N \times N$ matrix, and each grid cell can be naturally assigned a pair of indices $(i,j)$, with $i$ and $j$ representing the cell row and column respectively (see figure 1). The top-left grid cell has $(i,j) = (1,1)$, and the bottom-right one has $(i,j) = (N,N)$, with $i$ increasing downwards and $j$ increasing towards the right.
Fig. 1: A Sudoku puzzle with blocks of size $m \times n = 2 \times 3$. The cell indices $(i,j)$ are shown inside every cell, and the block indices $(I,J)$ are shown on the left and top sides of the grid respectively. Both the height (number of rows) and width (number of columns) of the puzzle are equal to $N = m n = 6$. The puzzle has $n = 3$ blocks along the vertical direction and $m = 2$ blocks along the horizontal direction.
Let us then define $N^3$ variables as follows: $x_{ijk}$ is an integer variable which is restricted to be either $0$ or $1$, with $1$ meaning the value at cell $(i,j)$ is equal to $k$, and $0$ meaning the value at cell $(i,j)$ is not $k$. Rule (1) above, i.e., the requirement that each $k \in [1,N]$ must appear exactly once per row, can be expressed as: $$\sum_{j=1}^N x_{ijk} = 1 \quad \textrm{ for } \quad i,k = 1,2,\ldots,N \label{post_25ea1e49ca59de51b4ef6885dcc3ee3b_row_constraint}$$ In other words, for a fixed row $i$ and a fixed $k \in [1,N]$, only a single $x_{ijk}$ will be $1$ on that row for $j = 1, 2, \ldots, N$.
If the fact that the constraints above do not have any "$\leq$" is bothering you, remind yourself of the fact that $x = a$ can be expressed as $a \leq x \leq a$, which in turn is equivalent to the combination of $-x \leq -a$ and $x \leq a$, i.e., any equality constraint can be expressed as a pair of "less-than-or-equal-to" constraints like the ones we need in Linear Programming problems.
Rule (2), i.e., the requirement that each $k \in [1,N]$ must appear exactly once per column, can be expressed as: $$\sum_{i=1}^N x_{ijk} = 1 \quad \textrm{ for } \quad j,k = 1,2,\ldots,N \label{post_25ea1e49ca59de51b4ef6885dcc3ee3b_column_constraint}$$ Expressing rule (3), i.e., the requirement that each $k \in [1,N]$ must appear exactly once per block, is a bit more complicated. A way to simplify this task is by assigning pairs of indices $(I, J)$ to each block in a similar way as we did for cells (see figure 1): $(I,J) = (1,1)$ represents the top-left block, and $(I, J) = (n,m)$ represents the bottom-right block, with $I$ increasing downwards and ranging from $1$ to $n$ (there are $n$ blocks along the vertical direction) and $J$ increasing towards the right and ranging from $1$ to $m$ (there are $m$ blocks along the horizontal direction). Block $(I,J)$ will therefore contain cells with row indices $i = (I-1)m + 1, \ldots, Im$ and column indices $j = (J-1)n + 1, \ldots, Jn$. Therefore, rule (3) can be expressed as: $$\sum_{i=(I-1)m + 1}^{Im} \sum_{j=(J-1)n + 1}^{Jn} x_{ijk} = 1 \quad \textrm{ for }\quad \left\{ \begin{matrix} \; I = 1,2,\ldots,n \\[5pt] \; J = 1,2,\ldots,m \\[5pt] \; k = 1,2,\ldots,N \end{matrix} \right. \label{post_25ea1e49ca59de51b4ef6885dcc3ee3b_block_constraint}$$ Notice that both equations \eqref{post_25ea1e49ca59de51b4ef6885dcc3ee3b_row_constraint} and \eqref{post_25ea1e49ca59de51b4ef6885dcc3ee3b_column_constraint} represent $N^2$ constraints each. As it turns out, equation \eqref{post_25ea1e49ca59de51b4ef6885dcc3ee3b_block_constraint} contains $nmN = N^2$ constraints as well.
So far, our formulation does not prevent $x_{ijk}$ from being equal to $1$ for two or more distinct values $k$ at the same cell $(i,j)$. We need therefore to impose these constraints by hand: $$\sum_{k=1}^N x_{ijk} = 1 \quad \textrm{ for } \quad i,j = 1,2,\ldots,N \label{post_25ea1e49ca59de51b4ef6885dcc3ee3b_cell_constraint}$$ Not all cells are initially empty on a Sudoku puzzle. Some cells will already contain values at the beginning, and those values are necessary so that the solution to the puzzle can be deduced logically (ideally, there should be a single valid solution). Let $\mathcal{C}$ be the set of tuples $(i,j,k)$ representing the fact that a cell $(i,j)$ contains the value $k$ at the beginning. We then have: $$x_{ijk} = 1 \quad \textrm{ for } (i,j,k) \in \mathcal{C} \label{post_25ea1e49ca59de51b4ef6885dcc3ee3b_initial_puzzle_constraint}$$ Our last set of constraints limits the values which each variable $x_{ijk}$ can take: it's either $0$ or $1$ (our ILP formulation then technically defines a Binary Integer Linear Programming problem, or BILP for short): $$0 \leq x_{ijk} \leq 1 \quad \textrm{ for } \quad i,j,k = 1,2,\ldots,N \label{post_25ea1e49ca59de51b4ef6885dcc3ee3b_binary_constraint}$$ Since most ILP solvers allow bounds on the values which each $x_{ijk}$ can take to be set directly, this last set of constraints often does not need to be specified in the same manner as the previous ones.
We now have a complete ILP formulation of a Sudoku puzzle: our goal is to minimize the objective function $f(x_{111}, \ldots, x_{ijk}, \ldots x_{NNN}) = 0$ subject to all constraints specified on equations \eqref{post_25ea1e49ca59de51b4ef6885dcc3ee3b_row_constraint}, \eqref{post_25ea1e49ca59de51b4ef6885dcc3ee3b_column_constraint}, \eqref{post_25ea1e49ca59de51b4ef6885dcc3ee3b_block_constraint}, \eqref{post_25ea1e49ca59de51b4ef6885dcc3ee3b_cell_constraint}, \eqref{post_25ea1e49ca59de51b4ef6885dcc3ee3b_initial_puzzle_constraint} and \eqref{post_25ea1e49ca59de51b4ef6885dcc3ee3b_binary_constraint}.
After solving the ILP problem outlined above, the solution to the Sudoku puzzle can be constructed directly by placing, at each cell $(i,j)$, the value $k$ such that $x_{ijk} = 1$.
|
2017-06-24 20:48:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.42490237951278687, "perplexity": 905.4099911375645}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320338.89/warc/CC-MAIN-20170624203022-20170624223022-00695.warc.gz"}
|
https://www.gradesaver.com/textbooks/math/prealgebra/prealgebra-7th-edition/chapter-4-section-4-2-factors-and-simplest-form-vocabulary-readiness-video-check-page-234/6
|
# Chapter 4 - Section 4.2 - Factors and Simplest Form - Vocabulary, Readiness & Video Check: 6
cross products
#### Work Step by Step
When fractions are equivalent, the cross products are equivalent. The term cross product is derived from the X shape of the numbers being multiplied. $\frac{a}{b}=\frac{x}{y}$ $\frac{a}{b}\nearrow\!\!\!\!\!\!\searrow\frac{x}{y}$ $\Rightarrow ay=bx$
After you claim an answer you’ll have 24 hours to send in a draft. An editor will review the submission and either publish your submission or provide feedback.
|
2018-04-26 11:31:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.611755907535553, "perplexity": 1522.8634246233787}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125948126.97/warc/CC-MAIN-20180426105552-20180426125552-00299.warc.gz"}
|
http://www.birs.ca/events/2013/5-day-workshops/13w5069/videos/watch/201307020906-Duncan.html
|
Video From 13w5069: Water waves: computational approaches for complex problems
Tuesday, July 2, 2013 09:06 - 09:48
The Cross-Stream Structure of the Crests of Breaking Waves
|
2017-09-23 07:34:02
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8004361987113953, "perplexity": 7439.600352383297}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818689572.75/warc/CC-MAIN-20170923070853-20170923090853-00564.warc.gz"}
|
https://mathematica.stackexchange.com/questions/259416/create-predicate-function-like-primeq-fibonacciq-pythgoreanq
|
# Create Predicate Function like PrimeQ :: fibonacciQ, pythgoreanQ
Yes, just define the function. i.e. Use the Fibonacci numbers. What are good practices to perform it? (Vague, let's get specific.)
1. Create list of Fib.s 0, to 1,000,000.
2. Make function _Integer parameter, name fibonacciQ.
3. Error trap with message if < 0, or > million.
Is that how, to do it? My real GOAL is any {x,y,z} of Pythagorean Triples (not {3,4,5}, or {3,10}, or {3,4,4,5} eg.), only one value tested per call, create a predicate function pythagoreanQ[ ] for them. Likely, load table from disk, when needed in a notebook. Is that my only practical alternative, unless computing pythags 3 .. 1,000,000 directly every time?
• fibQ = TrueQ[# == Fibonacci@Round[Log[GoldenRatio, Sqrt[5] #]]] &? pythagQ = TrueQ[#1^2 + #2^2 == #3^2] & or pythagQ = TrueQ[{1, 1, 1} . #^2 == 0] &? Dec 8, 2021 at 4:04
• "not {3,4,5}, or {3,10}, or {3,4,4,5}" -- does that mean that pythagoreanQ[{3,4,5}] should return False? "Select[ {3,13, 100, 17} returns {True, True, True, True}" seems to make no sense since {3,13,100,17} is not a triple at all. Also Select[] is a built-in function that does not return a list of True unless the input contained True. Do you mean pQ[n] should return True if n is a member of any Pythagorean triple? Dec 8, 2021 at 14:29
• Last part yes, "pQ[n] should return True if n is a member of any Pythagorean triple? pQ[x,y,z] is an invalid form, just as pQ[x,z] would be. ** A simple Select[ data, pQ[ #] &] is how it will be called. Dec 8, 2021 at 16:46
• You should note that if $n$ is an integer greater than $2$, there always exist positive integers $x,y,z$ such that $x^2+y^2=z^2$ and $x=n$. Dec 8, 2021 at 18:08
PythagoreanQ[{x_Integer, y_Integer, z_Integer}] :=
|
2023-03-21 07:28:03
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2771511375904083, "perplexity": 5497.214957570457}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943637.3/warc/CC-MAIN-20230321064400-20230321094400-00444.warc.gz"}
|
https://www.math.colostate.edu/~king/codex/abstracts/wu2022.html
|
# Codes and Expansions (CodEx) Seminar
## Hau-Tieng Wu (Duke University)Some recent progress in diffusion based manifold learning and its applications
Diffusion based machine learning algorithms have been actively developed and applied to analyze complicated datasets in past decades. However, there are still many interesting challenging problems left. I will share some recent progress, particularly the $L^\infty$ spectral convergence and robustness to heterogeneous and colored noise under the manifold setup. Its application to spatiotemporal analysis for biomedical nonstationary signals will be demonstrated as a motivating example. If time permits, some analyses under the kernel random matrix setup with the bandwidth selection problem will be discussed.
|
2023-03-30 23:19:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5754826664924622, "perplexity": 1444.400146165248}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949506.62/warc/CC-MAIN-20230330225648-20230331015648-00349.warc.gz"}
|
http://www.solutioninn.com/an-independentmeasures-study-comparing-four-treatment-conditions-with-a-sample
|
# Question
An independent-measures study comparing four treatment conditions with a sample of n = 8 in each condition produces sample means of M1 = 2, M2 = 3, M3 = 1, and M1 = 6.
a. Compute SS for the set of 4 treatment means. (Treat the means as a set of n = 4 scores and compute SS.)
b. Using the result from part a, compute n(SSmeans). Note that this value is equal to SSbetween (see Equation 12.6).
c. Now, find the 4 treatment totals and compute SSbetween with the computational formula using the T values (see Equation 12.7). You should obtain the same result as in part b.
Sales1
Views94
|
2016-10-28 21:47:27
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8686186671257019, "perplexity": 1128.0637759954059}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988725475.41/warc/CC-MAIN-20161020183845-00339-ip-10-171-6-4.ec2.internal.warc.gz"}
|
https://scicomp.stackexchange.com/questions/19749/correct-way-of-computing-norm-l-2-for-a-finite-difference-scheme
|
# Correct way of computing norm $L_2$ for a finite difference scheme
I am computing the rate of convergence of my finite difference scheme in norm $L_2$. Which is the correct way to compute it? This:
\begin{align} L_2 &= \frac{1}{N}\sqrt{\sum_{j=1}^N(u^{numerical}_j-u^{exact}_j)^2} \;\;\;\;\;\;\;\;\;\;\;\; (1)\\ \end{align}
or this:
\begin{align} L_2 &= \sqrt{\frac{1}{N}\sum_{j=1}^N(u^{numerical}_j-u^{exact}_j)^2} \;\;\;\;\;\;\;\;\;\;\;\; (2)\\ \end{align}
where $u^{numerical}$ is the computed along-x velocity at velocity point $j$, with $j=1..N$, and $u^{exact}$ is the analytical solution. In some publications I saw the second option, but that clashes against the fact that norm $L_2$ should be always smaller than norm $L_1$ (see https://math.stackexchange.com/questions/245052/showing-that-l2-norm-is-smaller-than-l1 ). However here norms are "scaled" so I guess that inequality should hold only for the unscaled version. Note that if one uses (1) the inequality still holds. In Leveque's book a pag 140 the error is defined as:
\begin{align} L_2 &= \sqrt{\Delta{x} \sum_{j=1}^N(u^{numerical}_j-u^{exact}_j)^2} \;\;\;\;\;\;\;\;\;\;\;\; (3)\\ \end{align}
Although this would be strictly true for a finite volume method, if we extend it to a finite difference method and to two dimension, by division by the total active area of the domain one gets equation (2). This question is related to question Why a finite difference scheme would give second order of accuracy in norm L2 but 1.5 with L1 (while 1 with Linf)? , since I see that the order of accuracy is larger than the thoretical value of 2 if I use equation (1), while it is smaller if I use equation (2). Which one is the best formulation?
• Why the $1/N$ coefficient? – horchler May 25 '15 at 22:24
• Well you need to scale the norm, if you remove 1/N you cannot compute the rate of convergence, For example if one refines the grid by halving the grid spacing in x and y, one has four times the number of grid cells and if the error is 4 times smaller in each point => L2 stays constant. – Millemila May 25 '15 at 22:35
Technically the $L^2$ norm (upper case "L") is an integral norm of a function defined as $$\left|\left|f(x)\right|\right|_2 = \sqrt{ \int_\Omega |f(x)|^2 dx}.$$ I'm sure you meant $l^2$.
What you typically want to compute in the context of convergence of numerical methods is the finite dimensional analog of the $L^2$ norm which is an area-normalized version of the $l^2$ norm, meaning $$\left|\left|f\right|\right| = \sqrt{\sum_i \left|f_i\right|^2 \Delta x_i}$$ Note that $\Delta x$ is inside the square root.
In practice, the area of the domain is just a constant that you don't really care about and the step size ($\Delta x$) is often a constant so you can just use your equation (2). Equation (1) is off by a factor of $1/\sqrt{N}$ so you will underestimate your error for large $N$.
• If you are comparing functions on different meshes, you want to use the form that includes $N$ or $\Delta x$. – Bill Barth May 26 '15 at 1:29
|
2019-12-11 03:53:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 3, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9839235544204712, "perplexity": 355.2787508010884}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540529745.80/warc/CC-MAIN-20191211021635-20191211045635-00426.warc.gz"}
|
http://math.stackexchange.com/questions/50343/how-to-compute-the-residue-of-a-complex-function-with-essential-singularity
|
# How to compute the residue of a complex function with essential singularity
I'm a student of mechanical engineering and I have a problem with computing residues of a complex function. I've read some useful comments in the other threads. Now I've got some ideas about essential singularity and series expansion in computing the residue. However, I still can't find the solution to my problem. I arrived at a complex function in the process of finding a solution to a mechanical problem. Then I had to obtain the residues to proceed to the next steps. The function has the following form:
$$f(z)=\frac{\exp(Az^N+Bz^{-N})}{z}$$
where $A$, $B$ and $N$ are real constants $(N \geq 3)$.
I want to compute the resiude at $z=0$. I wrote the Laurent series of $f$, but got an infinite sum. I do not even know if I am at the right direction. I would be really thankful if someone could give me a hint on this and put me back in the right direction.
-
I formatted your post in order to make it more readable. I guess you want $N$ to be a natural number, else your $f$ isn't defined in a neighborhood of zero. – t.b. Jul 8 '11 at 15:24
What do you mean by "I got an infinite sum"? The Laurent series is of course going to be an infiite sum, but I don't think that is what you meant... – Mariano Suárez-Alvarez Jul 8 '11 at 15:26
@Theo Buehler: Thanks for modification. And as you guessed, N is a natural number (integer). – Benyamin Gholami Jul 11 '11 at 6:27
@Mariano Suárez-Alvarez♦: You are right. I meant the infinite sum does not converge. However, with the help of friends here I'm at the right direction now. – Benyamin Gholami Jul 11 '11 at 6:30
From each even term in the exponential series, you get one contribution where the positive and negative powers of $z$ cancel out. This is the middle term of the binomial expansion, and to get the coefficient $a_{-1}$ in the Laurent series you need to sum this over all even terms, which leads to
$$a_{-1}=\sum_{k=0}^\infty\frac1{(2k)!}\binom{2k}{k}A^kB^k=\sum_{k=0}^\infty\frac{(AB)^k}{k!^2}\;.$$
-
Thank you for your answer. – Benyamin Gholami Jul 11 '11 at 7:47
It's probably worth point out that $\sum w^k/(k!)^2$ is the Bessel function $J_0(2i w)$. en.wikipedia.org/wiki/Bessel_function – David Speyer Jul 15 '11 at 16:04
@David Speyer: Thank you for your helpful comment. – Benyamin Gholami Jul 16 '11 at 20:15
@David: Actually, it's more convenient to use the modified Bessel function to represent that series: $I_0(2\sqrt{w})$... – J. M. Jul 17 '11 at 11:28
I'm assuming $N$ is an integer here to avoid issues with branches of the logarithm.
Note that $\exp(Az^N) = \sum_{m=0}^{\infty} {A^mz^{mN} \over m!}$ and $\exp(Bz^{-N}) = \sum_{m=0}^{\infty} {B^mz^{-mN} \over m!}$, so that $\exp(Az^N + Bz^{-N})$ is the product of these two series, and ${\displaystyle{\exp(Az^N + Bz^{-N}) \over z}}$ is the product of these two series divided by $z$.
The residue at $z = 0$ will be the cofficient of the ${1 \over z}$ term in this product divided by $z$, or in other words the constant term in the product. A constant term is obtained in the multiplication when you multiply a term ${A^mz^{mN} \over m!}$ by a term ${B^mz^{-mN} \over m!}$ (with the same $m$). So the constant term in the overall product is the sum of these, that is, $\sum_{m=0}^{\infty} {(AB)^m \over (m!)^2}$ and this will be your residue.
-
Thank you both for your good answers. Your comments really helped. In the process of my solution I arrived at a contour integral. Then I had to use the residue theorem to solve the integral. Also I arrived at another integral with the following format: $$f(z)=\frac{\exp(Az^N+Bz^{-N})}{z-a}$$ where a is a constant and can be in/or out of the closed contour. My problem is that how I should obtain the residue in both cases (when a is in and out of the contour). Can I use the Laurent expansion like the previous problem? How the singularity at z=0 affect the solution? – Benyamin Gholami Jul 11 '11 at 7:43
|
2014-09-01 14:42:36
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9470990300178528, "perplexity": 215.53133028570554}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1409535919066.8/warc/CC-MAIN-20140901014519-00222-ip-10-180-136-8.ec2.internal.warc.gz"}
|
http://nbviewer.jupyter.org/urls/umich.box.com/shared/static/y1fw0iameuixrq9zt02d.ipynb
|
# Generalized Estimating Equations¶
Key ideas: GEE, ordinal data
This notebook demonstrates how ordinal regression models can be estimated using GEE. Ordinal regression models have a dependent variable that takes on finitely many values that are ordered.
The data are from a study of A-level exam scores in the UK. A student receives one of 6 possible ordered scores on the exam, so the data are ordinal. The students are clustered based on the school that they attended prior to taking the exam. In addition to gender and age, we also have access to the student's score on a previous exam. We can use regression analysis to assess how these three predictors are related to the score on the A-level exam. GEE allows us to do this while accounting for the ordered structure of the response, and the (presumed) non-independence among students who attend the same school. This non-independence could be driven by socioeconomic and other factors that cluster within neighborhoods or schools.
The data are available from: http://www.bristol.ac.uk/cmm/learning/support/datasets/
First we import all the libraries that we will need:
In [15]:
import numpy as np
import pandas as pd
import statsmodels.api as sm
The data are available from the link http://www.bristol.ac.uk/cmm/learning/support/datasets/ as the archive momeg.zip, which contains several files. Here we will focus on the chemistry scores, contained in the file a-level-chemistry.txt.
In [16]:
data = pd.read_csv("a-level-chemistry.txt", header=None, sep=' ')
data.columns = ["Board", "A-Score", "Gtot", "Gnum",
"Gender", "Age", "Inst_Type", "LEA",
"Institute", "Student"]
print data.describe()
Board A-Score Gtot Gnum Gender Age Inst_Type LEA Institute Student
0 3 4 53 8 1 3 7 1 1 23
1 3 10 61 8 1 -3 7 1 1 24
2 3 10 58 8 1 -4 7 1 1 26
3 3 10 60 8 1 -2 7 1 1 28
4 3 8 58 9 1 -1 7 1 1 30
Board A-Score Gtot Gnum Gender \
count 31022.000000 31022.000000 31022.000000 31022.000000 31022.000000
mean 4.156631 5.812585 55.951196 8.870157 0.443556
std 1.975071 3.319167 11.061807 1.033699 0.496812
min 1.000000 0.000000 0.000000 1.000000 0.000000
25% 3.000000 4.000000 49.000000 8.000000 0.000000
50% 3.000000 6.000000 56.000000 9.000000 0.000000
75% 6.000000 8.000000 63.000000 9.000000 1.000000
max 8.000000 10.000000 103.000000 14.000000 1.000000
Age Inst_Type LEA Institute Student
count 31022.000000 31022.000000 31022.000000 31022.000000 31022.000000
mean -0.467765 4.936980 83.695216 18.410386 94734.095158
std 3.430142 3.363572 36.940333 17.364621 56578.654087
min -6.000000 0.000000 1.000000 1.000000 23.000000
25% -3.000000 1.000000 52.000000 5.000000 45292.750000
50% -1.000000 5.000000 95.000000 13.000000 91269.500000
75% 3.000000 7.000000 116.000000 26.000000 144722.750000
max 5.000000 11.000000 131.000000 100.000000 196035.000000
Here is a plot of the marginal distribution of exam scores.
In [17]:
sc = np.unique(data["A-Score"])
sc_n = [sum(data["A-Score"] == x) for x in sc]
plt.bar(sc-0.25, sc_n, 0.5)
plt.gca().set_xticks(sc)
plt.gca().set_xticklabels([str(x) for x in sc])
plt.xlim(-1, 11)
plt.xlabel("A-level score", size=17)
plt.ylabel("Number of students", size=17)
Out[17]:
<matplotlib.text.Text at 0x7f6ddd572e90>
We will use the student's score on the GCSE as one predictor of the A-level score. This can be viewed as a control for baseline achievement. The GCSE is a more basic exam than the A-level exam, but there should be a positive association between GCSE and A-level scores.
The GCSE has several components, and not all students take all of the components. The variable 'Gtot' in the data set is the sum of scores over all the components taken by a given student. We divide the sum by the number of components taken to obtain a mean score that is not inflated just because a student takes more exams.
In [18]:
data["GCSE"] = data["Gtot"] / data["Gnum"]
There are two levels of clustering in these data. The "LEA" is a grouping of schools in a local area, and the "Institute" variable is the actual school that a student attends. Institute codes are recycled across the LEA's, so here we create a composite code that reflects both the LEA and the institute.
In [19]:
data["School"] = [str(x) + str(y) for x,y in
zip(data["LEA"], data["Institute"])]
us = set(data["School"])
us = {x: k for k,x in enumerate(list(us))}
data["School"] = [us[x] for x in data["School"]]
These are all the variables in the analysis.
In [20]:
endog = data["A-Score"]
exog = data[["Gender", "Age", "GCSE"]]
groups = data["School"]
We will convert the GCSE score to a Z-score since we don't really care about the scale.
In [21]:
mn = exog.loc[:, "GCSE"].mean()
sd = exog.loc[:, "GCSE"].std()
exog.loc[:, "GCSE"] = (exog.loc[:, "GCSE"] - mn) / sd
As a basic initial model we will treat all the indicators as being independent. Although this is clearly not correct, the GEE estimates and standard errors are still meaningful.
In [22]:
ind = sm.cov_struct.Independence()
model0 = sm.OrdinalGEE(endog, exog, groups, cov_struct=ind)
result0 = model0.fit()
print result0.summary()
print ind.summary()
OrdinalGEE Regression Results
===================================================================================
Dep. Variable: A-Score No. Observations: 155110
Model: OrdinalGEE No. clusters: 2410
Method: Generalized Min. cluster size: 5
Estimating Equations Max. cluster size: 940
Family: Binomial Mean cluster size: 64.4
Dependence structure: Independence Num. iterations: 7
Date: Mon, 12 Jan 2015 Scale: 1.000
Covariance type: robust Time: 00:50:18
==============================================================================
coef std err z P>|z| [95.0% Conf. Int.]
------------------------------------------------------------------------------
I(y>0.0) 3.1493 0.039 80.567 0.000 3.073 3.226
I(y>2.0) 1.9428 0.031 62.975 0.000 1.882 2.003
I(y>4.0) 0.8754 0.028 31.262 0.000 0.821 0.930
I(y>6.0) -0.2568 0.028 -9.070 0.000 -0.312 -0.201
I(y>8.0) -1.6892 0.033 -51.327 0.000 -1.754 -1.625
Gender -0.5722 0.032 -18.041 0.000 -0.634 -0.510
Age -0.0298 0.003 -9.751 0.000 -0.036 -0.024
GCSE 1.7411 0.026 66.114 0.000 1.689 1.793
==============================================================================
Skew: -0.1833 Kurtosis: 0.5139
Centered skew: -0.1453 Centered kurtosis: 0.3663
==============================================================================
Observations within a cluster are modeled as being independent.
Next we fit the same model with an exhangeable covariance structure. This dependence structure treats all binary indicators from a given cluster as having the same pairwise correlation. Since some indicators are "between subjects" and others are "within subjects", this may not reflect the true dependence very well. Here we limit the iterations so there will be a convergence warning.
In [23]:
ex = sm.cov_struct.Exchangeable()
model1 = sm.OrdinalGEE(endog, exog, groups, cov_struct=ex)
result1 = model1.fit(maxiter=5)
print result1.summary()
print ex.summary()
OrdinalGEE Regression Results
===================================================================================
Dep. Variable: A-Score No. Observations: 155110
Model: OrdinalGEE No. clusters: 2410
Method: Generalized Min. cluster size: 5
Estimating Equations Max. cluster size: 940
Family: Binomial Mean cluster size: 64.4
Dependence structure: Exchangeable Num. iterations: 5
Date: Mon, 12 Jan 2015 Scale: 1.000
Covariance type: robust Time: 00:50:28
==============================================================================
coef std err z P>|z| [95.0% Conf. Int.]
------------------------------------------------------------------------------
I(y>0.0) 2.9170 0.046 64.065 0.000 2.828 3.006
I(y>2.0) 1.7610 0.034 51.547 0.000 1.694 1.828
I(y>4.0) 0.7190 0.029 24.401 0.000 0.661 0.777
I(y>6.0) -0.4017 0.029 -13.840 0.000 -0.459 -0.345
I(y>8.0) -1.8439 0.034 -54.260 0.000 -1.910 -1.777
Gender -0.5778 0.026 -21.841 0.000 -0.630 -0.526
Age -0.0305 0.003 -10.403 0.000 -0.036 -0.025
GCSE 1.7143 0.025 67.338 0.000 1.664 1.764
==============================================================================
Skew: -0.1111 Kurtosis: 0.4678
Centered skew: -0.0767 Centered kurtosis: 0.3215
==============================================================================
The correlation between two observations in the same cluster is 0.048
/projects/3c16f532-d309-4425-850f-a41a8c8f8c92/statsmodels-master/statsmodels/genmod/generalized_estimating_equations.py:1078: IterationLimitWarning: Iteration limit reached prior to convergence
IterationLimitWarning)
The dependence is weak, perhaps due to the oversimplicity of the dependence model:
Next we plot the conditional probability distributions of the A-level exam scores for females and males. Age and the GCSE score are held fixed at their mean values.
In [24]:
ev = [{"Gender": 1}, {"Gender": 0}]
fig = result1.plot_distribution(None, ev)
ax = fig.get_axes()[0]
leg = plt.figlegend(ax.get_lines(), ["Females", "Males"], "upper center", ncol=2)
leg.draw_frame(False)
Here is a plot of the fitted distribution of A-level exam scores for females of average age, when the GCSE exam score is either 1 standard deviation below or 1 standard deviation above the mean.
In [25]:
ev = [{"GCSE": -1, "Gender": 1}, {"GCSE": 1, "Gender": 0}]
fig = result1.plot_distribution(None, ev)
ax = fig.get_axes()[0]
leg = plt.figlegend(ax.get_lines(), ["-1SD GCSE", "+1SD GCSE"], "upper center", ncol=2)
leg.draw_frame(False)
Now we can try a more sophisticated dependence model which accounts for the differing correlations between within-subject and between-subject indicators (as well as for different degrees of correlation among the within-subject indicators).
In [29]:
dep = sm.cov_struct.GlobalOddsRatio("ordinal")
Here we fit the model again using the new dependence structure. The parameter estimates from the exchangeable model can be used as starting values. This currently takes quite a while to run to convergence, so here we limit the number of iterations. Since we limit the number of iterations, we get a convergence warning message. We can ignore this message here since we have done this deliberately.
In [ ]:
model2 = sm.OrdinalGEE(endog, exog, groups, cov_struct=dep)
result2 = model2.fit(start_params=result1.params, maxiter=2)
print result2.summary()
OrdinalGEE Regression Results
===================================================================================
Dep. Variable: A-Score No. Observations: 155110
Model: OrdinalGEE No. clusters: 2410
Method: Generalized Min. cluster size: 5
Estimating Equations Max. cluster size: 940
Family: Binomial Mean cluster size: 64.4
Dependence structure: GlobalOddsRatio Num. iterations: 2
Date: Mon, 12 Jan 2015 Scale: 1.000
Covariance type: robust Time: 01:07:51
==============================================================================
coef std err z P>|z| [95.0% Conf. Int.]
------------------------------------------------------------------------------
I(y>0.0) 3.0021 0.036 83.751 0.000 2.932 3.072
I(y>2.0) 1.8451 0.028 65.154 0.000 1.790 1.901
I(y>4.0) 0.7979 0.025 31.969 0.000 0.749 0.847
I(y>6.0) -0.3360 0.025 -13.689 0.000 -0.384 -0.288
I(y>8.0) -1.8018 0.027 -66.548 0.000 -1.855 -1.749
Gender -0.5759 0.025 -23.013 0.000 -0.625 -0.527
Age -0.0297 0.003 -11.060 0.000 -0.035 -0.024
GCSE 1.7267 0.021 83.945 0.000 1.686 1.767
==============================================================================
Skew: -0.1419 Kurtosis: 0.4969
Centered skew: -0.1056 Centered kurtosis: 0.3495
==============================================================================
/projects/3c16f532-d309-4425-850f-a41a8c8f8c92/statsmodels-master/statsmodels/genmod/generalized_estimating_equations.py:1078: IterationLimitWarning: Iteration limit reached prior to convergence
IterationLimitWarning)
We have already captured much more of the dependence structure than when working with the exchangeable correlation. However due to the large sample size, this does not impact the parameter estimates or standard errors very much.
In [ ]:
print dep.summary()
Global odds ratio: 1.713
|
2018-03-20 04:10:22
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5934441089630127, "perplexity": 5802.593175674084}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647280.40/warc/CC-MAIN-20180320033158-20180320053158-00731.warc.gz"}
|
https://chemistry.stackexchange.com/tags/surfactants/hot
|
# Tag Info
26
It boils down to the definition of soap. Wikipedia defines a soap as the salt of a fatty acid. IUPAC claims the smallest fatty acid can be considered to have 4 carbons. Therefore the simplest soap molecule would be a (generally sodium) salt of butyric (butanoic) acid, i.e. sodium butyrate. Now apart from the chemical definition, a soap must adhere to its ...
18
Quoting a reddit post from chemist nallen: The short answer is that the dirt and oils from your hair compete for the surfactants making them less available to form lather, which is small bubbles. To better understand the mode of action, you have to know a bit about the formulation of shampoos and the nature of dirt and oils. Dirts and oils deposit on hair ...
14
The froth has little or no effect on the detergent action. In fact detergent manufacturers have to add anti-foaming agents to stop excessive foam generation in automatic washing machines. Froth/foam is generated because surfactants in the detergent adsorb at the air water interface and stop the water film that makes up the walls of the bubbles from ...
14
I don't remember to have seen the term tensid outside the context of the German language. The term is typically applied to agents that decrease the surface tension (hence tensid) of water. Note that the logic behind the English equivalent surfactant is just the same. So, in summary tensid and surfactant mean the same.
11
The color of the precipitate is strongly reminiscent of copper(II) hydroxide. I hypothesize: Chloride corrosion of copper from exposed brass (ref 1 || ref 2) a. Per ref 2 above, sulfate may also participate Alkaline precipitation of $\ce{Cu(OH)2}$ in the neutrally-buffered soap Insufficient $\ce{Na_4EDTA}$ to complex the amount of $\ce{Cu}$ corroded from ...
10
A concept that captures how effective a detergent is at doings its job is aptly called "detergency." As might be expected this is a complex property and difficult to describe unambiguously with a single parameter. Quoting Ref. 1 : Detergency is difficult to evaluate because it depends on a multitude of variables that in most cases are ...
7
The diazo dye (1), known as Solvent Yellow 124 or Somalia Yellow is soluble in aliphatic hydrocarbons. In the European Community, it is used to mark low-taxed diesel fuel (for heating purposes). In order to prevent its use in cars, samples are acidified. This results in acetal cleavage and protonation of the diazo-fragment, yielding the red, water-soluble ...
7
Volatile surfactants There are many molecules that are both surfactants (in water) and volatile, but of the ones I can think of, none can be used safely in home laundry applications. Hydrocarbon derivatives. The examples you found by googling are in this class. 3,5-dimethyl-1-hexyn-3-ol is the principal component of Surfynol 61 (probably named Surfynol ...
7
While I agree that the definition matters I disagree with the definition of soap as "as the salt of a fatty acid". For one SDBS, AOT, SDS, Cocamide DEA (not even a salt) and CTAB are all popular soaps that are not fatty acid salts. I also do not think that soap and surfactant are interchangeable words. Surfactants lower surface tension, soaps create micelles ...
7
Soapiness or anything like that cannot be represented by a single number. Hence no point in inventing such a quantity. Just like we cannot associate a plain number to odors, soapiness is scientifically meaningless because it will be an umbrella term. Just like the term polarity is misused, soapiness could be even worse. The only common property of ...
6
I am wondering if there are surfactants that exist in a gas state that can be used to reduce the surface tension of a liquid. The most direct answer is "no". Gases do not have surface tension and gas-phase molecules do not chemically affect the surface tension of liquids. I know its a broad question, so for example, are there any gases that could reduce ...
6
Yes, this is a good explanation. Rain water is almost pure water, it lacks of bivalent ions such as $\ce{Ca^{2+}}$ and $\ce{Mg^{2+}}$ which helps soap to rinse of. You will get the same effect when a water softener is installed on water distribution.
5
SLS is highly soluble in water (100g/L) Therefore, in the specific case of your teeth having lots of bubbles around them after brushing, you should simply gargle your mouth with water to remove the excess SLS.
5
Your goal is to reduce the surface tension of the water so that it does not support the formation of large bubbles or inhibit the whetting of the dirt particles. When the bubbles are small, the dust inside will be more likely to make contact with the liquid surface of the bubble within the lifetime of the bubble. Also, when it does make contact, it will ...
5
According to Scott and Jones1: ... problems arise with branched alkyl chains, a side chain methyl group or a gem-dimethyl-branched chain cannot undergo β-oxidation by microorganisms and must be degraded by loss of one carbon atom at a time (α-oxidation). * text by the authors, highlighting and hyperlinking by me. A study by Whyte et al2 on the degradation ...
4
The micellization releases water molecules which initially solvate the hydrophilic head groups. The release of these waters greatly increases the entropy. Reference: Dong et al., Chem. Rev. 2010, 110, 4978-5022.
4
CTAB, aka Hexadecyltrimethylammonium bromide DTAB, aka Dodecyltrimethylammonium bromide Sort of a vague question, don't know where to start. To begin, CTAB has a hydrocarbon chain length of 16 (as its name suggests) whereas DTAB has a -CH2- chain length of 12. Otherwise they have the same hydrophilic/polar "head"; all this is obvious, you can tell just by ...
4
I'm afraid no: you are applying a chemical principle to a complex biological response. The Le Chatelier principle state: If a chemical system at equilibrium experiences a change in concentration, temperature, volume, or partial pressure, then the equilibrium shifts to counteract the imposed change and a new equilibrium is established. We start with the ...
4
Can I wash my hair with dishwashing agent [...]! If it's an agent for dishwashing machines: Please DON'T! Cleaning in dishwashing machines is performed in alkaline medium. The tabs usually contain (among others): alkaline compounds, such as sodium carbonate and sodium hydroxide bleaching agents, such as sodium perborate and sodium percarbonate ...
4
Tap water contains calcium and magnesium ions; more so if the water is hard. These ions can bind to soap over time and make an insoluble soap curd. Is the sponge clean? If not, another reason is that the surfactants in the soap that make the foam might be forming micelles around whatever grime is in your sponge, so that there aren't many left to make the ...
4
There's no technical reason why you couldn't make a powdered dishwashing detergent and I suspect that someone probably does sell it, just as there are liquid, tablet, and powdered automatic dishwasher detergents and laundry detergents. It does seem more convenient to use a liquid detergent for washing dishes in the sink. A liquid doesn't need time to ...
4
When you wash a piece of cloth, even without soap, it's better to agitate. The vibration, friction, etc. will help speed up the process of mixing the dirt/grease with water. Even the very definition of "agitation" uses the example of washing. From Vocabulary.com Agitation is the act of stirring things up, like the agitation of a washing machine that ...
4
I don't have a lot of experience with salicylic acid as I'm just starting to work with it myself at the moment. However, I find that it disperses better if it is added to the solution at a high temperature (before surfactants) and then homogenised (once thickening agents, if any, have been added). The other thing which may help is by dissolving in denatured ...
4
I hope this picture helps. The interaction between a molecule of water and an ion is stronger than the hydrogen bonding that occurs between two water molecules.
4
Cationic micelles similar to your examples can and do exist. For example cetyl trimethylammonium bromide (Growth of Cationic Micelles in the Presence of Organic Additives, P. A. Hassan and J. V. Yakhmi, Langmuir, 2000, 16 (18), pp 7187–7191, DOI: 10.1021/la000517o). Additionally, there are cationic micelle detergents (Ionization of cationic micelles: ...
4
I don't know what you mean by the term "greasy", but here's a possible explanation: When Dead Sea water evaporates on skin, then salts will precipitate when their maximum solubility is reached. The mixture of tiny salt crystals formed, and water might be responsible for a greasy sensation.
3
The modern definition of washing is "washing can be defined as both the removal by water or aqueous surfactant solution of poorly soluble matter and the dissolution of water-soluble impurities from textile surfaces". This is in the book "Laundry Detergents" written by E. Smulders. There is a whole chapter on the physical chemistry of the washing process. You ...
3
A lemon has an oil that is present mostly in the skin. For instance if you squeeze lemon into hot tea in a styrofoam cup and let the tea sit, then you'll see a surface roughness at the liquid level. The oil actually dissolves the styrofoam. The oil acts a defoamer. So with a styrofoam cup you're suppose to use "commercial" lemon juice which has no oil (lest ...
Only top voted, non community-wiki answers of a minimum length are eligible
|
2021-10-22 10:56:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.47628161311149597, "perplexity": 2324.030746239649}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585504.90/warc/CC-MAIN-20211022084005-20211022114005-00379.warc.gz"}
|
https://hozy.dev/writings/discontinuities-on-the-real
|
# Discontinuities of a real function
##### December 28, 2022
Let $f\colon\R\to\R$ be a real function. Then, $f$ is continuous at $a$ if and only if for all $\varepsilon>0$, there exists a $\delta>0$ such that $|f(x)-f(a)|<\varepsilon$ for all $x$ with $|x-a|<\delta$. Our task today is to investigate the discontinuities of $f$, i.e. the set $D(f)$ of points at which $f$ is not continuous. Specifically, we want to answer two questions:
1. What type of set can $D(f)$ be?
2. Given any set $A$, is there a function $f$ that is discontinuous exactly on $A$?
## Back and forth
We first need to know what it means for $f$ to be discontinuous at a point. By negating the definition of continuity, we can deduce that
$f$ is discontinuous at $a$ if and only if there exists an $\varepsilon>0$ such that for all $\delta>0$, we have $|f(x)-f(a)|\ge\varepsilon$ for some $x$ with $|x-a|<\delta$. (*)
The definition should make sense to you: it says that as we zoom in closer to $a$, the function $f$ stops getting closer to $f(a)$ at some point, instead staying outside a "barrier" around $f(a)$.
Given a set $I$, the oscillation of $f$ on $I$ is
$\omega(f;I)=\sup\{ |f(x)-f(y)|\colon x,y\in I \}.$
The geometric meaning of the oscillation is hopefully clear. Note that the oscillation is always nonnegative, and if $J\subset I$, then $\omega(f;J)\le\omega(f;I)$. Thus, as $I$ "shrinks" toward $a$, the oscillation $\omega(f;I)$ decreases to some nonnegative limit. We can now define the oscillation of $f$ at $a$ by
$\omega_f(a) = \lim_{h\to 0^+} \omega(f; (a-h,a+h)).$
If $\omega(f;I)$ is how much $f$ "jumps" on the set $I$, then $\omega_f(a)$ is how much $f$ jumps at $a$. The definition (*) above can be rewritten into
$f$ is discontinuous at $a$ if and only if there exists an $\varepsilon>0$ such that for all $\delta>0$, we have $\omega_f(x, (a-\delta,a+\delta))\ge\varepsilon$.
We can now conclude that $f$ is discontinuous at $a$ if and only if $\omega_f(a)>0$.
## Open sesame
The preceding observation allows us to write $D(f)$ as the set of $a$ for which $\omega_f(a)>0$. We need one final detail.
Lemma. For any $r>0$, the set $\{a\colon\omega_f(a)< r\}$ is open.
Proof. Given $a$ such that $\omega_f(a), there is a bounded interval $I$ containing $a$ such that $\omega(f;I). Consequently, $I$ is a subset of $\{a\colon\omega_f(a)< r\}$, since for any $x\in I$, we have $\omega_f(x)\le\omega(f; I). $\square$
With this lemma, we can finally prove a characterization of $D(f)$.
Theorem. $D(f)$ is the countable union of closed sets in $\R$.
Proof. The first part of this characterization is easy enough:
\begin{align*} D(f) &= \{a\colon \omega_f(a) > 0\} \\ &= \bigcup_{n=1}^\infty \{a\colon \omega_f(a)\ge 1/n\}. \end{align*}
The complement of $\{a\colon \omega_f(a)\ge 1/n\}$ is $\{a\colon \omega_f(a)< 1/n\}$, which is open. Thus, $D(f)$ is a countable union of closed sets. $\square$
This result holds in a general metric space. In a topological space, a countable union of closed sets is called an $F_\sigma$ set.1 We now know that $D(f)$ is an $F_\sigma$ set, but is every $F_\sigma$ set a $D(f)$ for some $f$?
## The imitation game
The answer to the question above is "yes": any $F_\sigma$ set in $\R$ is the set of discontinuities for some function $f$. We shall construct $f$ by trying to follow the characterization in our proof of the major theorem in the previous section.
Theorem. If $E$ is an $F_\sigma$ set in $\R$, there is some $f\colon\R\to\R$ such that $E=D(f)$.
Proof. Let $E=\bigcup_{i=1}^\infty F_i$, where each $F_i$ is closed. Let $E_1=F_1$ and
$E_n = \left(\bigcup_{i=1}^n F_i\right) \setminus \left(\bigcup_{i=1}^{n-1} F_i\right).$
Note that the $E_n$ are disjoint, $\bigcup_{n=1}^m E_n = \bigcup_{n=1}^m F_n$, and $\bigcup_{n=1}^\infty E_n = E$. Each $E_n$ has a countable dense subset $D_n$ (see later). We now define $f$ on $\R$ as follows:
$f(x) = \begin{cases} 0 & x \in \R\setminus E, \\ 2^{-n+1} & x \in D_n, \\ 2^{-n} & x\in E_n\setminus D_n. \end{cases}$
The function is well-defined since $\R\setminus E$ and the $E_n$ are mutually disjoint. We now consider some $x\in\R$.
Suppose $x\in E$, then $x\in E_n$ for some $n$.
• If $x\in E_n\setminus D_n$, then any open interval containing $x$ contains some $y\in D_n$, as $D_n$ is dense in $E_n$, in which case we have $|f(x)-f(y)|=2^{-n}$.
• If $x\in D_n$, any open interval containing $x$ contains a point $y$ outside $D_n$, since $D_n$ is countable. This point $y$ can be in $E_n\setminus D_n$ ($f(y)=2^{-n}$), or in $\R\setminus E$ ($f(y)=0$), or in some $E_k$ different from $E_n$, in which case we can pick another point in $D_k$ instead ($f(y)=2^{-k+1}$).
In any case, we can always find a point $y$ in this interval such that $|f(x)-f(y)|\ge 2^{-n}$. Altogether, if $x\in E_n$, then $\omega_f(x)\ge 2^{-n}>0$, so $f$ is discontinuous at $x$.
Suppose $x\in \R\setminus E$. For all integer $m>0$, the set $A_m=\bigcup_{n=1}^m E_n=\bigcup_{n=1}^m F_n$ is closed and does not contain $x$. For any positive integer $N$, $\R\setminus A_N$ is open, so there is an open interval $(x-r,x+r)$ disjoint from $A_N$, in which case the interval only contains points in $\R\setminus E$ or in $E_k$ with $k>N$. Hence, for any $y$ with $|x-y|, we have $|f(x)-f(y)|\le 2^{-N}$, and since $N$ is arbitrary, this means the oscillation of $f$ at $x$ is zero. Thus, $f$ is continuous at $x$, and we conclude $D(f)=E$. $\square$
There is one detail we need to work out, and that is each $E_n$ has a countable dense subset, i.e. it is separable. Luckily for us, a subset of a separable metric space is always separable, and $\R$ is indeed separable, so $E_n$ is separable too. This detail will be left an exercise for the reader. After that, it should not be difficult to see that this theorem holds for any separable metric space, not just the reals, and the proof is mostly analogous.
To recap, we answer the two questions put forth from the beginning of the post:
1. What type of set can $D(f)$ be?
It must be a countable union of closed sets, in other words, an $F_\sigma$ set.
2. Given any set $A$, is there a function $f$ that is discontinuous exactly on $A$?
In a separable metric space $M$, for any $F_\sigma$ set, there is a function $f\colon M\to M$ whose $D(f)$ is that set.
## Footnotes
1. $F$ here stands for the French word fermé, meaning closed, while the $\sigma$ means a "sum" of sets, i.e. a union.
|
2023-03-29 02:57:36
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 170, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9904291033744812, "perplexity": 198.5099722220345}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948932.75/warc/CC-MAIN-20230329023546-20230329053546-00209.warc.gz"}
|
https://forthright48.com/bitwise-operators/
|
# Bitwise Operators
We know about arithmetic operators. The operators $+,-,/$ and $\times$ adds, subtracts, divides and multiplies respectively. We also have another operator $\%$ which finds the modulus.
Today, we going to look at $6$ more operators called “Bitwise Operators”. Why are they called “Bitwise Operators”? That’s because they work using the binary numerals (bits, which are the individual digits of the binary number) of their operands. Why do we have such operators? That’s because in computers all information is stored as strings of bits, that is, binary numbers. Having operators that work directly on them is pretty useful.
We need to have a good idea how Binary Number System works in order to understand how these operators work. Read more on number system in [ref1] Introduction to Number Systems. Use the $decimalToBase()$ function from [ref1] to convert the decimal numbers to binary and see how they are affected.
# Bitwise AND ($\&$) Operator
The $\&$ operator is a binary operator. It takes two operands and returns a single integer as the result. Here is how it affects the bits.
$0 \ \& \ 0 = 0$
$0 \ \& \ 1 = 0$
$1 \ \& \ 0 = 0$
$1 \ \& \ 1 = 1$
It takes two bits and returns another bit. The $\&$ operator will take two bits $a, b$ and return $1$ only if both $a$ AND $b$ are $1$. Otherwise, it will return $0$.
But that’s only for bits. What happens when we perform a $\&$ operation on two integer number?
For example, what is the result of $A \ \& \ B$ when $A = 12$ and $B = 10$?
Since $\&$ operator works on bits of binary numbers we have to convert $A$ and $B$ to binary numbers.
$A = (12)_{10} = (1100)_2$
$B = (10)_{10} = (1010)_2$
We know how to perform $\&$ on individual bits, but how do we perform $\&$ operation on strings of bits? Simple, take each position of the string and perform and operations using bits of that position.
$1100$
$\underline{1010} \ \&$
$1000$
Therefore, $12 \ \& \ 10 = 8$.
In C++, it’s equivalent code is:
printf ("%d\n", 12 & 10 );
# Bitwise OR ($|$) Operator
The $|$ operator is also a binary operator. It takes two operands and returns single integer as the result. Here is how it affects the bits:
$0 \ | \ 0 = 0$
$0 \ | \ 1 = 1$
$1 \ | \ 0 = 1$
$1 \ | \ 1 = 1$
The $|$ operator takes two bits $a, b$ and return $1$ if $a$ OR $b$ is $1$. Therefore, it return $0$ only when both $a$ and $b$ are $0$.
What is the value of $A|B$ if $A=12$ and $B=10$? Same as before, convert them into binary numbers and apply $|$ operator on both bits of each position.
$1100$
$\underline{1010} \ |$
$1110$
Therefore $12|10 = 14$.
printf ( "%d\n", 12 | 10 );
# Bitwise XOR ($\wedge$) Operator
Another binary operator that takes two integers as operands and returns integer. Here is how it affects two bits:
$0 \ \wedge \ 0 = 0$
$0 \ \wedge \ 1 = 1$
$1 \ \wedge \ 0 = 1$
$1 \ \wedge \ 1 = 0$
XOR stands for Exclusive-OR. This operator returns $1$ only when both operand bits are not the same. Otherwise, it returns $0$.
What is the value of $A \wedge B$ if $A=12$ and $B=10$?
$1100$
$\underline{1010} \ \wedge$
$0110$
Therefore, $12 \wedge 10 = 6$.
In mathematics, XOR is represented with $oplus$, but I used $\wedge$ cause in C++ XOR is performed with $\wedge$.
printf ( "%d\n", 12 ^ 10 );
# Bitwise Negation ($\sim$) Operator
This is a unary operator. It works on a single integer and flips all its bits. Here is how it affects individual bits:
$\sim \ 0 = 1$
$\sim \ 1 = 0$
What is the value of $\sim A$ if $A = 12$?
$\sim \ (1100)_2 = (0011)_2 = (3)_{10}$
But this will not work in code cause $12$ in C++ is not $1100$, it is $0000…1100$. Each integer is $32$ bits long. So when each of the bits of the integer is flipped it becomes $11111…0011$. If you don’t take an unsigned int, the value will even come out negative.
printf ( "%d\n", ~12 );
# Bitwise Left Shift ($<<$) Operator
The left shift operator is a binary operator. It takes two integers $a$ and $b$ and shifts the bits of $a$ towards LEFT $b$ times and adds $b$ zeroes to the end of $a$ in its binary system.
For example, $(13)_{10} << 3 = (1101)_2 << 3 = (1101000)_2$.
Shifting the bits of a number $A$ left once is the same as multiplying it by $2$. Shifting it left three times is the same as multiplying the number by $2^3$.
Therefore, the value of $A << B = A \times 2^B$.
printf ( "%d\n", 1 << 3 );
# Bitwise Right Shift ($>>$) Operator
The $>>$ Operator does opposite of $<<$ operator. It takes two integer $a$ and $b$ and shifts the bits of $a$ towards RIGHT $b$ times. The rightmost $b$ bits are lost and $b$ zeroes are added to the left end.
For example, $(13)_{10} >> 3 = (1101)_2 >> 3 = (1)_2$.
Shifting the bits of a number $A$ right once is the same as dividing it by $2$. Shifting it right three times is the same as dividing the number by $2^3$.
Therefore, the value of $A >> B = \lfloor \frac{A}{2^B} \rfloor$.
printf ( "%d\n", 31 >> 3 );
# Tips and Tricks
1. When using $<<$ operator, careful about overflow. If $A << B$ does not fit into $int$, make sure you type cast $A$ into long long. Typecasting $B$ into long long does not work.
2. $A \ \& \ B \leq MIN(A,B)$
3. $A \ | \ B \geq MAX(A,B)$
# Conclusion
That's all about bitwise operations. These operators will come useful during "Bits Manipulation". We will look into it on next post.
# Resource
1. forthright48 - Introduction to Number Systems
1. Wiki - Bitwise Operation
|
2022-10-04 23:28:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49275335669517517, "perplexity": 649.6805902824328}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337529.69/warc/CC-MAIN-20221004215917-20221005005917-00101.warc.gz"}
|
https://www.transtutors.com/questions/activity-based-costing-choose-answer-a-b-c-109437.htm
|
# activity Based Costing; choose answer a,b,c
1 answer below »
Daffodil Company produces two products, Flower and Planter. Flower is a high-volume item totaling 20,000 units annually. Planter is a low-volume item totaling only 6,000 units per year. Flower requires one hour of direct labor for completion, while each unit of Planter requires 2 hours. Therefore, t...
## 1 Approved Answer
3 Ratings, (9 Votes)
• ### week lab 3-2
(Solved) November 15, 2014
,500 Requirements: 1. Compute the manufacturing product cost per unit of each type of bookcase. 2 . Suppose that pre-manufacturing activities, such as product design, were assigned to the standard
#### Answer Preview :
Solution of 1st problem Requirements: 1. Compute the manufacturing product cost per unit of each type of bookcase. Particulars Standard Book Case Unfinished Book Case No. of units produced...
## Recent Questions in Financial Accounting
Submit Your Questions Here!
Copy and paste your question here...
Attach Files
|
2018-09-21 13:32:07
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3647803068161011, "perplexity": 12815.68987550228}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267157203.39/warc/CC-MAIN-20180921131727-20180921152127-00331.warc.gz"}
|
http://mathhelpforum.com/pre-calculus/175916-equations-line-three-space-print.html
|
Equations of a line in three space.
• March 26th 2011, 01:49 PM
darksoulzero
Equations of a line in three space.
The question: A line q is defined be the vector equation [x,y,z]=[4,-3,2] +t[1,8,-3]. Answer the following without technology.
b) determine if line q intersects line r, defined by [x,y,z]=[2,-19,9]+s[4,-5,-9].
I'm not sure how I would do this. I tried setting the parametric equations equal to each other and then solving for t and then subbing into one the original but I was getting a crazy answer. I have no idea how would i do this in 3-space. Any help would be appreciated.
• March 26th 2011, 01:57 PM
Plato
Quote:
Originally Posted by darksoulzero
The question: A line q is defined be the vector equation [x,y,z]=[4,-3,2] +t[1,8,-3]. Answer the following without technology.
b) determine if line q intersects line r, defined by [x,y,z]=[2,-19,9]+s[4,-5,-9].
Solve this system $\begin{gathered}
~4 + t = 2 + 4s \hfill \\
- 3 + 8t = 19 - 5s \hfill \\
\end{gathered}$
if the solution also solves $2-3t=9-9s$ then they intersect.
• March 26th 2011, 04:17 PM
darksoulzero
Thanks for the superfast reply. I appreciate your help, but I found out how do it when I googled it.
• March 26th 2011, 04:20 PM
Plato
Quote:
Originally Posted by darksoulzero
Thanks for the superfast reply. I appreciate your help, but I found out how do it when I googled it.
Can you googled it on a test?
• March 26th 2011, 05:00 PM
darksoulzero
Actually, I found another site similar to this one and it had an example explaining the solution. Also, this is just extra practice and I wanted to try and get the answer on my own so I used google.
|
2013-05-23 23:24:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.868427574634552, "perplexity": 809.7267886180714}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704075359/warc/CC-MAIN-20130516113435-00093-ip-10-60-113-184.ec2.internal.warc.gz"}
|
https://www.zigya.com/study/book?class=11&board=bsem&subject=Physics&book=Physics+Part+I&chapter=Motion+in+A+Plane&q_type=&q_topic=Projectile+Motion&q_category=&question_id=PHEN11039746
|
 A stone is projected with velocity (√2gh) from the top of building h meter high. Show that it will fall farthest at a distance (2√2)h. from Physics Motion in A Plane Class 11 Manipur Board
### Book Store
Currently only available for.
CBSE Gujarat Board Haryana Board
### Previous Year Papers
Download the PDF Question Papers Free for off line practice and view the Solutions online.
Currently only available for.
Class 10 Class 12
A stone is projected with velocity (√2gh) from the top of building h meter high. Show that it will fall farthest at a distance (2√2)h.
Let the stone be projected at angle θ to make the stone fall at farthest point and (0,R) be the coordinates of point where stone reaches the ground.Â
Â
Here (0, R) lies on the trajectory. Therefore, we have,Â
141 Views
What is a vector quantity?
A physical quantity that requires direction along with magnitude, for its complete specification is called a vector quantity.
835 Views
Give three examples of vector quantities.
Force, impulse and momentum.
865 Views
What are the basic characteristics that a quantity must possess so that it may be a vector quantity?
A quantity must possess the direction and must follow the vector axioms. Any quantity that follows the vector axioms are classified as vectors.Â
814 Views
What is a scalar quantity?
A physical quantity that requires only magnitude for its complete specification is called a scalar quantity.
1212 Views
Give three examples of scalar quantities.
Mass, temperature and energy
769 Views
|
2018-08-19 00:30:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.41389456391334534, "perplexity": 4476.4644912848225}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221213903.82/warc/CC-MAIN-20180818232623-20180819012623-00321.warc.gz"}
|
https://electrichandleslide.wordpress.com/2013/01/
|
# Monthly Archives: January 2013
## Fillings of Contact Manifolds Part 2
Here is the overdue part 2 post on symplectic fillability. In part 1 , we gave several equivalent definitions of weak symplectic fillings, strong symplectic fillings, and Stein fillings. From these definitions we had the following inclusions:
$\{\text{Stein fillable}\} \subseteq \{\text{Strongly sympl. fill.}\} \subseteq \{\text{Weakly sympl. fill.}\}$
Now I want to summarize the results that show that each of these notions are distinct from each other, namely that each of these inclusions is strict, and also mention some conditions that can ensure some of these notions do coincide.
Historically, it was first shown that there are weakly symplectically fillable contact manifolds which are not strongly symplectically fillable by Eliashberg (the paper is called “Unique holomorphically fillable contact structure on the 3-torus”). The examples were contact structures on $T^3$. Identify $T^3=T^2\times S^1$, with coordinates $(x,y,\theta)\in (\mathbb{R}/\mathbb{Z})^3$. Then one can verify that for $n=1,2,3,\cdots$, $\alpha_n=\cos(n\theta)dx+\sin(n\theta)dy$ is a contact form ($\alpha_n\wedge d\alpha_n>0$). Then $\xi_n=\ker(\alpha_n)$ is a contact structure on $T^3$ for each $n=1,2,3\cdots$, and it was shown by Giroux and independently Kanda that these contact structures are not contactomorphic for different values of $n$, and all tight contact structures on $T^3$ are contactomorphic to one of these. The claim is that if $n>1$, $(T^3,\xi_n)$ is not strongly symplectically fillable, but all $(T^3,\xi_n)$ are weakly symplectically fillable.
First we will show that $(T^3,\xi_n)$ is weakly symplectically fillable. View $T^3$ as the boundary of $T^2\times D^2$. Put coordinates $(x,y)$ on the $T^2$ factor, and let $\theta$ parametrize the boundary of $D^2$. We can put a symplectic structure on $T^2\times D^2$ by taking the product of area forms on $T^2$ and $D^2$. We want to show that this is a weak symplectic filling of $(T^3,\xi_n)$. This is not immediately obvious, so instead we look at a family of contact structures, $\xi_n^{\varepsilon}$ which are contactomorphic to $\xi_n$, and show that $(T^2\times D^2,\omega =\omega_{T^2}\oplus \omega_{D^2})$ is a weak symplectic filling of $(T^3,\xi_n^{\varepsilon_0})$ for some $\varepsilon_0$. It is clear that the submanifolds $T^2\times \{p\}$ for $p\in \partial D^2$ are symplectic submanifolds of $T^2\times D^2$, so $\omega$ is positive on the tangent planes to these submanifolds. These tangent planes are described by $\{d\theta =0\}$ on $T^3$. Clearly these planes do not form a contact structure on $T^3$ since they are integrable, but $\omega$ will evaluate positively on a nearby contact structure. With this in mind, set
$\alpha_n^{\varepsilon} = (1-\varepsilon)d\theta +\varepsilon\alpha_n)$
Note that
$\alpha_n^{\varepsilon} = (1-\varepsilon)\varepsilon d\theta \wedge d\alpha_n +\varepsilon^2 \alpha_n\wedge d\alpha_n = \varepsilon^2\alpha_n\wedge d\alpha_n$
so $\xi_n^{\varepsilon}=\ker(\alpha_n^{\varepsilon})$ is a contact structure for any $\varepsilon>0$. When $\varepsilon=1$, $\xi_n^1=\xi_n$ so we have a homotopy through contact structures from $\xi_n$ to any $\xi_n^{\varepsilon}$ for $\varepsilon>0$. By Gray’s theorem there are diffeomorphisms $\phi^{\varepsilon}$ such that $(\phi^{\varepsilon})_*\xi_n^{\varepsilon}=\xi_n$, therefore each $\xi_n^{\varepsilon}$ is contactomorphic to $\xi_n$ for $\varepsilon>0$. When $\varepsilon$ is close to zero, $\xi_n^{\varepsilon}$ is close to $\ker(d\theta)$. Therefore, since $\omega|_{\xi}>0$ is an open condition, there exists some $\varepsilon_0>0$ such that $\omega|_{\xi_n^{\varepsilon_0}}>0$. Therefore $(T^2\times D^2, \omega)$ is a weak symplectic filling of $(T^3,\xi_n^{\varepsilon_0})$ which is contactomorphic to $(T^3, \xi_n)$. This shows that all $\xi_n$ are weakly symplectically fillable.
Eliashberg shows that for $n>1$, $(T^3, \xi_n)$ is not strongly symplectically fillable by contradiction. Recall that if it did have a strong symplectic filling, this filling could be glued symplectically to any symplectic manifold whose concave boundary is $(T^3,\xi_n)$. The idea is to find a symplectic manifold with concave boundary, such that if we could close off this boundary symplectically, we would obtain a symplectic manifold that cannot exist.
Consider $(\mathbb{R}^4, \omega_{std}=dx_1\wedge dy_1+dx_2\wedge dy_2)$. Take the product of circles $x_1^2+y_1^2=1, x_2^2+y_2^2=1$. This is a Lagrangian 2-torus inside $(\mathbb{R}^4,\omega_{std})$. Any Lagrangian submanifold, has a neighborhood symplectomorphic to a neighborhood of the zero section of its cotangent bundle. One can explicitly show that a disk bundle inside $T^*(T^2)$ has convex boundary inducing the contact structure $\xi_1$, using the radially outward pointing Liouville vector field. Therefore the complement of a neighborhood of this Lagrangian torus in $\mathbb{R}^4$ has concave boundary inducing the contact structure $\xi_1$. We can take an n-fold cover of the complement of this neighborhood in $\mathbb{R}^4$ so that on the $T^3$ boundary, the $S^1$ factor parameterized by $\theta$ is covered n times. This cover has a symplectic form given by the pull-back for which the $T^3$ boundary is still concave and the induced contact structure on the resulting boundary is $\xi_n$. This n-fold cover has $n$ ends, each symplectomorphic to the complement of a large compact set in $(\mathbb{R}^4,\omega_{std})$. If it were possible to cap off the concave boundary component with a convex symplectic filling of $(T^3,\xi_n)$, we would obtain a symplectic manifold without boundary, but with n standard ends. A theorem of Gromov states that this is impossible (if someone has a good explanation of the idea of this proof, that might make a good new post that I would appreciate). Thus we have reached a contradiction to the assumption that $(T^3,\xi_n)$ is strongly symplectically fillable for $n>1$.
While this example establishes the inequivalence of weak and strong symplectic fillability, the proof by contradiction and the reliance on a difficult theorem of Gromov which requires holomorphic curve techniques makes it difficult to see what the difference between weak and strong fillability would be in general. A generalization of this example was established by Ding and Geiges. They proved that a more general class of 2-torus bundles over the circle have contact structures which are weakly but not strongly symplectically fillable. Their proof that there are no strong fillings reduces the more general case to the original examples on $T^3$ using contact surgery. While this doesn’t make the underlying cause of non-fillability more clear, it does illuminate the fact that certain surgery operations on contact manifolds preserve different types of fillability and non-fillability.
In particular, Legendrian surgery (surgery along a Legendrian knot, with framing given by the contact framing with one additional negative twist) preserves weak symplectic fillability, strong symplectic fillability, and Stein fillability. The proofs that it preserves strong symplectic fillability and Stein fillability are due to Weinstein and Eliashberg respectively, who show that the convex symplectic or Stein structures can be extended over the corresponding handle attachments which provide cobordisms between the original manifold and the result of Legendrian surgery. The fact that Legendrian surgery preserves weak symplectic fillability is proven here in Theorem 2.3 by Etnyre and Honda, by showing that in a neighborhood of the Legendrian knot, the contact structure can be slightly perturbed so that the weak symplectic filling is a convex filling in a neighborhood of the knot. Then the surgery can be performed to preserve the strong fillability in a neighborhood of the knot, thus preserving the weak fillability of the entire contact manifold.
While we still do not seem to have a full understanding of when strong and weak fillability coincide and when they differ, there are certain situations where weak fillability implies strong fillability. Eliashberg proved the following proposition:
Proposition 4.1 in A few remarks about symplectic filling: Suppose that a symplectic manifold $(W,\omega)$ weakly fills a contact manifold $(V,\xi)$. Then if the form $\omega$ is exact near $\partial W=V$ then it can be modified into a symplectic form $\widetilde{\omega}$ such that $(W,\widetilde{\omega})$ is a strong symplectic filling of $(V,\xi)$.
The idea is essentially to consider the primitive $\eta$ for $\omega$ near the boundary and the contact form $\alpha$, and to interpolate between the primitives $t\alpha$ and $\eta$, so that $\widetilde{\omega} =d(t\alpha)$ near the boundary and $\widetilde{\omega} = d\eta$ a little further in from the boundary so it glues up to the original $\omega$ on the interior of the weak filling. The condition that $\omega$ be a weak filling ensures one can do this interpolation while maintaining the symplectic (non-degeneracy) condition.
A consequence of this proposition is that in order to find a 3-manifold which supports weakly fillable contact structures that are not strongly fillable, the 3-manifold must carry some nonzero second homology. In particular, an integer homology sphere which is weakly fillable is also strongly fillable. This fact comes into play in the next example.
While Eliashberg’s examples distinguishing weak and strong fillings were published in 1996, it took much longer to find examples of contact 3-manifolds which were strongly fillable but not Stein fillable. The technology needed to obstruct Stein fillability was the Heegaard Floer contact invariant. Ghiggini found the first examples in this paper, which were the Brieskorn homology spheres (with reversed orientation) $-\Sigma(2,3,6n+5)$ ($n$ even and $\geq 2$). These manifolds can be understood as 0-surgery on the positive trefoil knot in $S^3$, together with $-n-1$-surgery on its meridian (or alternatively as Seifert fibered spaces over $S^2$ with three singular fibers with coefficients 2,-3,$\frac{6n+5}{-n-1}$).
The 3-manifold obtained from $S^3$ by 0-surgery on the positive trefoil is actually a $T^2$ bundle over $S^1$, because the trefoil is a fibered knot of genus 1. Its monodromy is well understood, and in fact this is one of the 3-manifolds considered by Ding and Geiges, which have contact structures $\xi_n$ similar to those on $T^3$ (the contact structures twist n times in the direction of the monodromy). These contact structures are all weakly symplectically fillable (the filling is a Lefschetz fibration over a disk with tori as regular fibers, and the argument that this is a weak filling is similar to the argument that $T^2\times D^2$ is a weak filling of $T^3$ above).
So we have a weak symplectic filling of 0-surgery on the positive trefoil with any of the contact structures $\xi_n$, but we would like to also do $-n-1$ surgery on a meridian of this trefoil to obtain the Brieskorn spheres of interest. Ghiggini shows that this meridian can be realized as a Legendrian knot whose its contact framing twists $-n$ times around it, therefore performing $-n-1$ surgery on this meridian corresponds to Legendrian surgery. Since the above result said that Legendrian surgery preserves weak fillability, the resulting Brieskorn sphere with its corresponding contact structure $\eta_0$ is weakly symplectically fillable.
Furthermore, because the result of the Legendrian surgery gives an integer homology sphere, a weak symplectic filling can be perturbed into a strong symplectic filling, as mentioned above (since the restriction of the symplectic structure to the boundary is an exact form). Therefore we have strongly symplectically fillable contact structures on these Brieskorn spheres.
On the other hand, these contact manifolds $(-\Sigma(2,3,6n+5), \eta_0)$ are not Stein fillable. Ghiggini proves this by showing that the contact invariant is in the fixed point set of an involution on Heegaard Floer homology, and then studying generators of this fixed point set. Properties of these generators imply that all of them are sent to zero by a map induced by a Stein cobordism between $(S^3,\xi_{std})$ and $(-\Sigma(2,3,6n+5),\eta_0)$. Therefore the contact invariant of $\eta_0$, which is a linear combination of these generators is sent to zero by the map induced by a Stein cobordism, which is a contradiction. (One of the first theorems proved about the Heegaard Floer contact invariant is that it is sent to the generator of $\widehat{HF}(S^3)$ by any map induced by a Stein cobordism from $(S^3,\xi_{std})$, and thus the contact invariant of a Stein fillable contact structure is non-zero.)
More specifically, the generators of the fixed point set can be written as $c(\eta_i)+c(\eta_{-i})$ where $\eta_i$ is the contact structure obtained by Legendrian surgery on $S^3$ as in the picture below, where the Legendrian unknot has $(n-i)/2$ cusps on one side and $(n+i)/2$ cusps on the other side.
Note that each of these contact structures is Stein fillable (since it is obtained from Legendrian surgery), so their contact invariants are non-vanishing. By showing that $\overline{\eta_i}=\eta_{-i}$, Ghiggini proved that when $n$ is even, the fixed point set of the involution on Heegaard Floer homology is generated by $c(\eta_i)+c(\eta_{-i})$. Furthermore, any map induced by a 4-manifold cobordism to $S^3$ sends $c(\eta_i)$ and $c(\eta_{-i})$ to the same element, so in $\mathbb{Z}/2$ coefficients, the image of each generator of the fixed point set is zero.
In conclusion, these examples distinguish these three notions of fillability, but I think there is still a lot left to understand about when such examples can arise. Given an arbitrary 3-manifold, there is not usually much one can say about whether it supports contact structures that are strongly symplectically fillable but not Stein fillable, and unless it is a homology sphere it is hard to tell whether there could be weakly but not strongly symplectically fillable contact strucutres. The Heegaard Floer contact invariant has been a useful probe to obstruct Stein (and with twisted coefficients, symplectic) fillability, so a related question is to understand geometric conditions under which the contact invariant vanishes for tight contact structures (see Honda, Kazez, and Matic’s work for some situations when this occurs). Because there was a significant period of time where it was unknown whether weak, strong, and Stein fillability were equivalent notions, it seems that examples where these notions diverge are rare, but I don’t think we really have any idea how prevalent these examples can be, or what geometric properties can allow this phenomenon to occur.
Filed under Uncategorized
## Morse Homotopy and A-infinity; Part 2
Last time, I describe the first step in Fukaya’s proof that Morse homology has an $A-\infty$ structure, defined in terms of gradient flow trees. This time I’ll describe how the higher relations ($m_3, m_4, \dots$) arise from Morse theory.
To describe $m_3$, we need to look for all trees connecting $x,y,z$ to some $w$. At this level there are some complications. First, not all trees we need to consider will be isomorphic (at least if we fix a cyclic ordering of the exterior vertices). For the $m_2$, every tree was a Y. But at higher levels, we can have nonisomorphic trees, such as the following:
So we need to make sure we look for all possible trees. As we get to the higher $A – \infty$ maps the space of all possible trees gets complicated.
Secondly, we need to start keeping track of the length of interior edges. Each parametrizes a flow line, so we need to know exactly how long the partial flow line is that we want to parametrize. This hasn’t been a problem because up to now, every flow line we considered had at least one noncompact end, because one end was asymptotic to a critical point, so its length was infinite.
We can solve both these problems by realizing that we can form a moduli space of italics(metric trees), an approach originally due to Stasheff. Luckily, if we are careful, the moduli space is just some affine space $\mathbb{R}^k$ for some dimension $k$.
Embed the tree in the unit disk with the exterior vertices (the 1-valent vertices which map to critical points) cyclically ordered along the boundary. Furthermore, assume that no vertex has valence 2, as this corresponds to a broken flow line, which we will consider separately later on. The [exterior] edges will be those connected to the boundary and [interior] edges will be any other edge. Assign a positive real number to each interior edge (the exterior edges are assumed to have length $\infty$). Then the lengths of the interior edges, of which there can be at most $k-3$, where $k$ is the number of exterior vertices, identify parametrize the moduli space and identify it with $\mathbb{R}^{k-3}$. Call this space $\mathcal{T}_k$
For example, suppose $k = 4$, which is the relevant space for the $m_3$. The tree on the left corresponds to $-m \in \mathbb{R}$ and the tree on the right corresponds to $m \in \mathbb{R}$.
The space of metric trees is not compact but it can also be compactified, in the sense that as the length of the interior edge goes off to $\pm \infty$, the tree breaks into two metric trees. For $k = 4$, this is just the union of a two 3-leaf Y trees.
For higher $k$, it will break into two trees with $j$ and $k-j+2$ exterior vertices, respectively. Again, we see the principle that a moduli space can be compactified using the product of lower-dimensional moduli spaces of the same type of object.
Choose four Morse functions $f_1,f_2,f_3,f_4$ such that their differences $f_i-f_j$ are collectively generic. Let $\widetilde{\mathcal{M}}(x,y,z;w)$ denote the moduli of metric trees with 4 exterior vertices parametrizing flow lines of the difference functions $f_i-f_j$ in the following way: Each tree can be thought of as embedding in the unit disk and thus separates the disk into 4 regions. Cyclically label each region with a function $f_i$. Then an edge of a tree parametrizes a flow line of $f_i-f_j$ if it separates the regions labeled by $f_i$ and $f_j$. The trees can be oriented so that every interior vertex has exactly one outgoing edge and exactly one exterior vertex has an incoming edge. Assume that the oriented flows respect this orientation on edges.
The $m_3$ map is defined as follows.
$m_3: C(f_1,f_2) \otimes C(f_2,f_3) \otimes C(f_3,f_4) \rightarrow C(f_1,f_4)$
$m_3(x,y,x) = \sum_{w|[index]} |\widetilde{\mathcal{M}}(x,y,z;w)| w$
In other words, count all rigid trees connecting $x,y,z$ to $w$.
We’d now like to establish the $A-\infty$ relation
$m_3(d(x),y,z) + m_3(x, d(y),z) + m_3(x,y,d(z)) + m_2(m_2(x,y),z) + m_2(x,m_2(y,z)) + d(m_3(x,y,z))= 0$
As with the $m_2$, each term here describes one way a 1-dimensional tree could degenerate. The first three correspond to an incoming flow line breaking:
The last term corresponds to the outgoing flow line breaking:
And the terms involving $m_2$ correspond to an interior flow line breaking:
Again, the relation follows because each possible combination of broken trees can be glued to form the boundary of a 1-dimensional moduli space. Moreover, each 1-dimensional tree must break/degenerate in one of the above ways. Either an exterior edge breaks, which corresponds to the familiar compactification of Morse flow lines, or an interior breaks, which corresponds to the compactification of the moduli of metric trees.
Fukaya notes that this $m_3 relation$ descends to Massey products on cohomology but I won’t go into that here.
The higher $A-\infty$ maps arise in the same way. The map $m_k$ is defined by counting rigid trees with $k$ incoming and 1 outgoing edge. The $A-\infty$ relation follows from the fact that each 1-dimensional tree breaks into two rigid trees.
1 Comment
Filed under Uncategorized
## Morse Homotopy and A-infinity; Part 1
I found it useful to thoroughly go through and understand why the differential in Legendrian Contact Homology squares to 0 and so did some others, so I’m going to continue and discuss where higher A-infinity relations come from. A-infinity things seem intimidating because of all the little details necessary to define them. I hope it helps to have a geometric interpretation. It also helps to open your mind to universal algebra and operads, but I won’t go into that here.
Recall the definition of an A-infinity algebra. Let $A$ be a graded $k$-vector space. Then there exists an infinite family of maps $\{m_k \}$
$m_k: A^{\otimes k} \rightarrow A$
that satisfy the A-infinity relations, which are a nightmare to state. I’ll describe them pictorially as follows. Each $m_k$ can be represented by a box with k strands entering on the top and 1 strand leaving on the bottom, i.e. there are k inputs and 1 output. (Some people use trees to visualize this as well).
Then, we some over all possible ways to combine two of these maps into a map that has $l$ inputs and 1 output:
and require that this sum is 0, (if for simplicity we ignore signs and assume $k = \mathbb{F}_2$). So, we sum over all $i,j$ such that $i + j -1 = l$ and we can move the $m_i$ map left and right so that it takes as inputs any adjacent $i$-tuple.
The simplest condition is that $m_1 \circ m_1 = 0$, since there is only one way to combine two maps in a manner that takes 1 input and yields 1 output.
As a consequence, this means that the $m_1$ map is a differential.
The new two conditions are $m_2 ( d(x), y) + m_2 ( x,d(y)) + d( m_2 (x,y)) = 0$.
and $m_2(m_2(x,y),z) + m_2(x,m_2(y,z)) + d(m_3(x,y,x)) + m_3(d(x),y,z) + m_3(x,d(y),z) + m_3(x,y,d(z)) = 0$
Using Morse theory, Fukaya proved that the cohomology ring of a real analytic manifold is actually an A-infinity algebra. In the case of proving $d^2=0$, we used the fact that each term in $d^2$ corresponds to a union of two flowlines, called a broken flow.
Fukaya studied gradient flow trees. Let T be the tree in figure 1, with 4 vertices and 3 edges in a Y pattern. Now, pick 3 Morse functions $f_1,f_2,f_3$ such that their difference functions $f_i - f_j$ are generic. This means that the (un)stable manifolds of all difference functions intersect transversely. To define the $m_2$ map, we are going to look at the moduli space of flow trees corresponding to Y. That is, each edge will parametrize a flow line of some $f_i-f_j$. Let $\widetilde{\mathcal{M}}(x,y;z)$ denote the moduli space of gradient flow trees from $x,y$ to $z$ for $x$ a critical point of $f_1 - f_2$, $y$ a critical point of $f_2 - f_3$ and $z$ a critical point of $f_1 - f_3$,:
One way to get our hands on this space of trees is to take $W_u(x) \cap W_u(y) \cap W_s(z)$. For each point in this space, there is a unique triple of oriented flow lines connecting it to $x,y$ and $z$. Together, these form a tree of the form we are looking for. The dimension of the moduli space is $I(x) + I(y) - I(z) - n$, which can easily be checked because this is assumed to be a transverse intersection.
Then the $m_2$ map can be defined as
$m_2:C(f_1 - f_2) \otimes C(f_2 - f_3) \rightarrow C(f_1 - f_3)$
$m_2(x,y) = \sum_{z: I(z) = I(x) + I(y) - n} |\widetilde{\mathcal{M}}(x,y;z)| z$
We need to show that this satisfies the A-$\infty$ relation
$d(m_2(x,y)) + m_2(d(x),y) + m_2(x,d(y)) = 0$
In Morse theory, all moduli spaces, of flow lines and trees and of all dimensions, can be compactified. We just need to know how trees/flows degenerate as they head off to the open end. But as always, the principle here is that it can only degenerate into a union of trees you already know about.
For instance, take the Y. Since everything is finite dimensional, any open end of the moduli space must come from an open end of the moduli of the individual flow lines. So degeneration for trees looks exactly like degeneration for flow lines. Any of the three edges could break into pieces.
So, now we have a union of two trees, a segment and another Y. But the segment is just a flow line from $x$ to $w$, and so algebraically shows up in the differential. And the Y is another tree corresponding to an $m_2$ map.
Let’s work the other way. The each of term of $m_2(d(x),y)$ corresponds to a pair of a rigid flow line from $x$ to some $w$ and a rigid tree connecting $w,y$ to $z$. Similarly, each term of $m_2(y,d(y))$ corresponds to a pair of a rigid flow line from $y$ to some $w$ and a rigid tree connecting $x,w$ to $z$. Finally, each term of $d(m_2(x,y))$ corresponds to a pair of a rigid tree connecting $x,y$ to some $w$ and a rigid flow line from $w$ to $z$.
As with the differential, these pairs can be glued together into a 1-dimensional tree and each pair corresponds to the endpoint of some 1-dimensional component of the moduli space of trees from $x,y$ to $z$. Since this 1-dimensional space can be compactified in such a way that if we look at the boundary of all 1-dimensional moduli of trees, we get pairs as in the figure above.
Thus, $d(m_2(x,y)) + m_2(d(x),y) + m_2(x,d(y)) = 0$ and we know that our chain complex is at least an $A_2$-algebra.
Now, we have been using 3 different Morse functions and then three other difference functions. These critical points live in different chain complexes. This is ok. We already know that the chain homotopy type of the Morse complex is independent of the Morse function. So the algebraic structure of the $d$ map is the same in all three chain complexes. So it’s OK to think of this relation as living on a single chain complex
Fukaya also shows how this $m_2$ map descends to the familiar cup product on cohomology. First, recall the chain homotopy equivalence between Morse homology and singular homology. In one direction, the descending manifold of an index $i$ critical point, which is topologically a disk, is a singular $i$-chain, a continuous map of an $i$-dimensional simplex into the manifold $M$. In the other direction, given a singular $i$-chain, its image will (generically) intersect the stable manifolds of an index $i$ critical points in a finite number of points. Summing over all such intersections gives the algebraic image of the chain in the Morse complex. In more suggestive terms, one can think of flowing the image of the singular chain down by the descending gradient flow. The singular chain will “hang” on some of the index $i$ critical points and summing over these points gives the corresponding algebraic chain in the Morse complex.
In Morse cohomology (N.B the differential increases the grading, so we look at rigid, ascending flow lines), the cohomology complex is still generated by the critical points and their Poincare duals can be represented by the singular chains given by their ascending manifolds. Thus, take two basis elements $x \in H^i(M,\partial_{\text{Morse}})$, $y \in H^j(M,\partial_{\text{Morse}})$. These have Poincare duals $X,Y$ which are disks of dimension $n-i,n-j$. The Poincare dual of $x \wedge y$ is $X \cap Y$, which has dimension $n-i-j$. To determine which element this is in Morse cohomology, we flow this intersection upward by the gradient vector field and see which index $i+j$ critical points it gets caught by. In terms of gradient trajectories, we look for all the gradient flow lines from $X \cap Y$ to some critical point $z$ of index $i + j$. There are a finite number of such flow lines, each of which corresponds to a unique tree connecting $x,y$ to $z$. In the chain complex, this is exactly the $m_2$ map and so passing to cohomology we recover the cup product from $m_2$.
Filed under Uncategorized
## Fillings of Contact Manifolds
There are various notions of fillability of a contact 3-manifold, $(Y,\xi)$: weak symplectic filliability, strong symplectic fillability, and Stein fillability. When these notions were initially studied, it was not yet known whether they all coincided. It is fairly straightforward to show that a Stein filling is a strong symplectic filling, and a strong symplectic filling is a weak symplectic filling so
$\{\text{Stein fillable}\} \subseteq \{\text{Strongly sympl. fill.}\} \subseteq \{\text{Weakly sympl. fill.}\}$
However, it took more time to find examples to show that each of these inclusions is strict. Now it seems useful to mention the examples used to prove this in one place. While looking up these examples I noticed there are a few equivalent definitions of each of these notions, which are interchangeably used throughout the literature, so I’ll start by discussing those equivalences. That much seems to fill up an entire post so I’ll keep this post about definitions and equivalences (which make the above inclusions easy to conclude), and next post I’ll summarize known examples that show the inclusions are strict. So this is just an intro post.
A weak symplectic filling of $(Y,\xi)$ is a symplectic manifold $(W,\omega)$ with boundary $\partial W=Y$ such that $\omega|_{\xi}>0$. This definition is only considered for fillings of contact 3-manifolds, since that is when the symplectic 2-form can be positive on the 2-dimensional contact planes. However, the other notions of fillability hold in higher dimensions.
A strong symplectic filling of $(Y,\xi)$ is a symplectic manifold $(W,\omega)$ satisfying certain conditions that can be defined in a few equivalent ways. The first way is to require that there exists a Liouville vector field V defined near the boundary of $W$ ($\mathcal{L}_V\omega = \omega$) which is transverse to $\partial W=Y$ and a contact form for $\xi$ is given by $\alpha = i^*(\iota_V\omega)$.
One can verify any such an $\alpha$ is a contact form (i.e. $\alpha\wedge d\alpha >0$) since
$\alpha \wedge d\alpha = i^*(\iota_V\omega)\wedge d(i^*(\iota_V\omega)) = i^*(\iota_V\omega) \wedge i^*(\mathcal{L}_V\omega) = i^*(\iota_V\omega \wedge \omega)$
($i$ denotes the inclusion map of $\partial W$ into $W$ and $\iota_V$ denotes contraction by the vector field $V$. We have used $\mathcal{L}_V = \iota_V\circ d +d\circ \iota_V$ and $d\omega=0$.) The right most expression is a volume form by nondegeneracy of $\omega$ and the fact that $V$ is everywhere transverse to $\partial W$.
Another way to define a strong filling independently of the Liouville vector field, is to ask that $\omega$ be exact in a neighborhood of $\partial W$ and it has a primitive $\eta$ defined on this neighborhood of the boundary such that $\xi = \ker(i^*\eta)$ and $d\eta|_{\xi}>0$ (i.e. $i^*\eta$ is a contact form for $\xi$).
The first definition of a strong filling implies the second since $\eta=\iota_V\omega$ is a primitive for $\omega$ near the boundary, and the other conditions are satsified since $i^*(\iota_V\omega)$ is a contact form for $\xi$. Conversely, since $\omega$ is nondegenerate, contraction into $\omega$ gives an isomorphism between 1-forms and vector fields so there is a unique vector field $V$ such that $\iota_V\omega = \eta$. Therefore $\mathcal{L}_V\omega = d(\iota_V\omega) = d\eta = \omega$ so $V$ is a Liouville vector field which induces the correct contact form on the boundary because of its relation to $\eta$. If it were not transverse to $\partial W$ at some point then $i^*(\eta\wedge d\eta) = i^*(\iota_V\omega \wedge \omega)$ could not be positive at that point, so $i^*\eta$ would not be a contact form.
Note there is a shorter way of stating the second definition which is again equivalent. We must only require that $\omega$ restricts to an exact form on $\partial W$ such that $i^*\omega = d\alpha$ where $\alpha$ is a contact form for $\xi$. This is because $\alpha$ can be extended to a primitive for $\omega$ in a small neighborhood of $\partial W$ (note such a primitive exists because if $\omega$ restricts to something trivial in cohomology in $\partial W$, then it will represent something trivial in cohomology when restricted to a tubular neighborhood diffeomorphic to $\partial W\times [0,\varepsilon)$).
A Stein filling of a contact manifold, is a complex manifold $(X,J)$ for which there is a strictly plurisubharmonic function $\phi: X\to [0,c]$, meaning the form $\omega_{\phi}:= -dd^{\mathbb{C}}\phi$ is a symplectic form compatible with $J$ (where $d^{\mathbb{C}}\phi = d\phi \circ J$) or equivalently $\omega_\phi(v,Jv)>0$ for all $v\neq 0$. Cieliebak and Eliashberg call these functions $J$-convex in their new book and give a thorough discussion of various aspects of Stein and Weinstein structures. The contact structure induced on the boundary of a Stein filling is the hyperplane field of complex tangencies to the boundary, namely $T(\partial X)\cap J(T(\partial X))$.
A Weinstein filling of a contact manifold is an exact symplectic manifold $(W,\omega)$ such that $\omega=d\eta$ and there is a Liouville vector field $V$ on all of $W$ for which $\iota_V\omega = \eta$ and $V$ is gradient-like for some Morse function on $W$ for which $\partial W$ is the maximal level set. It is clear from this definition that Weinstein fillable implies strongly symplectically fillable. It takes a lot more work to show that Weinstein fillability is equivalent to Stein fillability (see Cieliebak and Eliashberg’s book for a thorough explanation).
There are a few other characterizations of Stein fillings which are often more useful for constructions. Eliashberg proved that in dimensions greater than four, a $2n$-manifold can be given a Stein structure if and only if it has a Morse function with critical points of index at most $n$. Attaching the index $n$ handles is the trickiest part and in dimension four, additional hypotheses are needed. Specifically the 2-handles must be attached along Legendrian knots in the boundary of the 0- and 1-handles with framings on the attaching circles exactly one less than the contact framing. A standard way to keep track of the contact framing in $\partial(\natural_k S^1\times B^3) = \#_k S^1\times S^2$ in a Kirby diagram is given by Gompf here .
Finally if one thinks of contact structures in terms of open book decompositions, a contact manifold is Stein fillable if and only if it is supported by an open book decomposition whose monodromy can be factored into positive Dehn twists. A Stein filling is given by the corresponding Lefschetz fibration over a disk. This correspondence is in this paper by Loi and Piergallini and this paper by Akbulut and Ozbagci.
Discussion of how these notions differ coming soon.
|
2019-06-26 10:20:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 342, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8469877243041992, "perplexity": 239.75937259465667}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000266.39/warc/CC-MAIN-20190626094111-20190626120111-00500.warc.gz"}
|
https://codegolf.stackexchange.com/questions/4029/adding-without-using-a-or-sign/6147
|
# Adding without using a + or - sign
There have been many "Do __ without __" challenges before, but I hope that this is one of the most challenging.
## The Challenge
You are to write a program that takes two natural numbers (whole numbers > 0) from STDIN, and prints the sum of the two numbers to STDOUT. The challenge is that you must use as few + and - signs as possible. You are not allowed to use any sum-like or negation functions.
Examples
input
123
468
output
591
input
702
720
output
1422
Tie Breaker: If two programs have the same number of + and - characters, the winner is the person with fewer / * ( ) = . , and 0-9 characters.
Not Allowed: Languages in which the standard addition/subtraction and increment/decrement operators are symbols other than + or - are not allowed. This means that Whitespace the language is not allowed.
• Perhaps this challenge was a lot easier than I thought it would be, especially in other languages, where there are sum() functions. I have to fix this. – PhiNotPi Dec 1 '11 at 0:45
• 100 rep bounty for anybody who can do this in Brainfuck. – Peter Olson Dec 1 '11 at 5:57
• @Peter Olson Well, I guess BF is not turing complete without either + or -... – FUZxxl Dec 1 '11 at 10:21
• Just to clarify, this challenge does not care about code length right? Only the number of +,- and tie breaker characters? ...or do you need to change the rules again :-) – Tommy Dec 1 '11 at 19:39
• @Tommy No, it does not. – PhiNotPi Dec 1 '11 at 21:58
# PHP, 22 chars
echo array_sum($argv); Documentation: array_sum() and $argv
Usage: php -r 'echo array_sum($argv);' 5 6 will output 11. • +/- and above mentioned tiebreakers are interesting, not chars. – user unknown May 30 '12 at 3:35 ## APL (10, 3 tie-breakers) ⎕←⍴(⍳⎕),⍳⎕ • 10x +/-? Chars arn't interesting, just +/- and the tie-breakers. – user unknown May 30 '12 at 3:34 ## Common Lisp Always surprising for such a verbose language. ((lambda(n m)(princ(length(append(make-list n)(make-list m)))))(read)(read)) # Python 3 - 159 characters | ().,*/= count: 39 A little long, but I felt like doing it HDL-style. a,b,c,d=int(input()),int(input()),0,0;l=max(a,b,key=int.bit_length) for i in range(len(bin(l<<1)[2:])): e=a>>i&1;f=b>>i&1;c|=(e^f^d)<<i;d=e&f|d&(e^f) print(c) • +/- tiebreakers are interesting, not characters. – user unknown May 30 '12 at 3:33 • @userunknown: What's wrong with characters? – JAB May 30 '12 at 16:55 • Characters are counted in CodeGolf tagged questions. If a reader doesn't read the question again, he will get the impression that character count is important and probably make a misguided vote based on that assumption. So the rules say: No (+|-). In Tiebreak digits, ().,*/= are important. You should include that number in your answer, so that not every visitor has to count them himself. – user unknown May 30 '12 at 20:41 • @userunknown: Oh, right. – JAB May 30 '12 at 20:46 ## C# 141 characters 27 tiebreakers +/- : 0 = : 0 . : 9 () : 18 Console.WriteLine(Enumerable.Range(1,int.Parse(Console.ReadLine())) .Concat(Enumerable.Range(1,int.Parse(Console.ReadLine()))) .Count()); ## Python 42 a,b=input() print eval("~-"*a+"~-"*b+"0") Seven tie breakers, but those are string operations C#, 12 tiebreakers (), 3 tiebreakers . using System; using System.Data; class Program { static string ReadLine { get { return Console.ReadLine(); } } static void Main(string[] args) { Console.Out.Write( new DataTable().Compute(string.Join("\u002b", ReadLine, ReadLine), "") ); } } # JavaScript 69 My attempt, using bitwise operators for(x=(z=prompt().split(" "))[0],y=z[1];y;)x^=y,y=(y&x^y)<<1;alert(x) # R, 44 bytes nchar(paste(strrep(" ",scan()),collapse="")) Try it online! Converts the two inputs to unary and "sums" via concatenation. Check out the other R answers: # C, score 0 with 8 tie-breakers f(a,b) { return b ? f(a^b, a<<1 & b<<1) : a; } ^ ^ ^ ^ ^ ^ ^^ I've highlighted the tie-break chars. Negative numbers are assumed to be 2s-complement. # Javascript (ES6) - 8 Tie-breaker characters add: { let a = prompt|false; let b = prompt|false; while (a & b) { a = a ^ b; b = [b & ~a][false|false] << true } alert(a | b); } • 2 * ( • 2 * ) • 4 * = # Runic Enchantments, (no +/-, 1 tie-breaker, 11 bytes) 'V2,k!$wi|;
Try it online!
Not allowed to have + characters? Fine, I'll create them myself with math and reflection. V has the byte value 86, which gets divided by 2 (the , and only tie-breaker character) converted to a character, and finally, reflectively written into the program under the instruction pointer. It then reads input, hits a reflector (|), reads input again, then reaches the space originally containing a w and adds them together before printing. The IP then performs an illegal action at , and is silently terminated.
Is this cheating? Probably. But the challenge says "don't use + characters" and the source code contains 0 of them.
# Alternate method, (no +/-, 3 tie-breakers, 16 bytes)
"ab"1,i*}i*qul\$;
Try it online!
Makes two strings, one of length x and one of length y, concatenates them together, then computes the length of the string. Does this count as a "sum-like function"? Perhaps. + on two strings does do the same thing.
# APL (Dyalog Unicode), 4 bytesSBCS, 0 +/-, 0 tie breakers
Can sum any number of non-negative integers.
≢⎕⌿#
Try it online!
# reference to the root object
⎕/ replicate by the number(s) from STDIN
for each number, we get that many copies of the corresponding element on the right, but since there is only one, all numbers get paired up with that one, forming a single list of N1+N2+…+Nn references
≢ tally
Output is implicitly to STDOUT.
# Gol><>, 12 bytes
IITM:{P}?trh
Thanks to JoKing for reminding me there was a Teleport Pad!!
1st version, 25 bytes
II!/M:{P}?\rh
\ :/
Try it online!
This was alot easier than I thought it would be!!! I will be golfing this alot more!!!
• 12 bytes using teleports, 10 bytes using put. Or 13 bytes without tie breaker characters – Jo King Feb 10 at 3:02
• @JoKing Wow. Those use strategies I wasn't even thinking of!!!, nice! – KrystosTheOverlord Feb 10 at 3:44
|
2019-03-20 11:06:26
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3858267664909363, "perplexity": 2520.533445523472}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202326.46/warc/CC-MAIN-20190320105319-20190320131025-00018.warc.gz"}
|
https://polistat.mbhs.edu/blog/ztest_origin_comparison/
|
# Introduction
One of the most prominent problems when we first started looking at averaging polls is that the best data we had to work with was from 2016, data that is known to be problematic. The original assumption was that we would create an exponential function that would intake data on a poll’s age, number and type of respondents e.g., likely vs. registered voters, and the quality of its polling company, then output a weight so that we could create a weighted average of all the polls. We would find these weights by trying out various regressions and finding which did the best job at predicting the outcome of the 2016 election based on the polls of the time. But as those polls were so far off the mark, how could we get any useful data from them? We would have no guarantee that the weights we came up with would be any good once applied to 2020 data.
So we took a step back and considered whether or not we even had to use an exponential function to begin with. We found little correlation between the quality of a poll’s company and the accuracy of its outcome, so we threw out that consideration entirely. We also found that while older polls were sometimes reliable and sometimes not for predicting an election, the very recent polls were almost always the closest. This inspired a new line of thought: what if, instead of performing a regression, we simply referred to the most recent polls? This thinking eventually resulted in the z-test model.
Aside from being a more accurate model with the old data than traditional regressions, the z-test model had several benefits. We no longer had to worry about accommodating for “convention bumps” in the polls, and if an event occurred that changed the political opinions of a significant number of people, our model would better reflect this change because it would not be affected by the out of date polls.
However, we adopted this model with caution. All of the other major political prediction models, as well as the older Oracle of Blair models, use the traditional regression method. And it is fairly counterintuitive that we can gain a more accurate result by completely throwing out information like older polls and reliability ratings.
# Early Implementation and Development
The z-test model started as an excel spreadsheet and the portion of the sheet where we tested the model with Texas data can be seen below. We started by sorting all polls by weeks before the election and averaging the polls in each week. This value can be seen in the column ‘pred’, which shows the predicted 2-party vote percent for Hillary. Then, we copied over the numbers of respondents who favored clinton and the number of two party respondents in the week, so that we could calculate the p-hat pooled value. The p-hat pooled value was then calculated between the last two available weeks and stored in column ‘p-hat pooled’.
We then performed a two-tailed z-test between the current week’s average and the week before its averaging using the p-hat pooled value with the null hypothesis that the two weeks would be the same. The z-score was stored in column ‘z-score’ and the p-value in column ‘p-value’. Then, if p was less than 0.05 the difference between the two weeks was deemed significant.
The confusing part about the spreadsheet is that it was easier to work with values in terms of being not significant, so in column ‘non sig’ you will see FALSE when there is a change between the two weeks. Lastly, we used the judgment in ‘non sig’ to make an adjusted prediction for every week that either used the current week as the prediction (‘non sig’=FALSE) or took a weighted average with the previous week’s adjusted prediction where the current week was weighted twice as much as the previous (‘non sig’=TRUE).
Fig. 1 Excel spreadsheet for Texas with the first version of the Z-Test method
There are a couple interesting things to notice about this primitive spreadsheet version of the model. First is that averages were not rolling. The polls were taken in 7 day blocks that had no overlap of polls. This was a good starting point to test the idea, but could never work for the final version since the model needs to update every day, but the non-rolling polls could only be tested and updated once a week.
The next interesting thing is the averages we were taking for the weeks was actually a pooled value of the week since we simplified our formulas by just taking (Clinton respondents in week x)/(2-party respondents in week x) as the average in that week. This is very much a poor man’s version of the meta-analysis we ended up doing, and also could not have worked in the final model since it ignored the differing errors amongst the polls.
The last interesting thing is that there was a weight on the newer week. The idea behind this weight was that there was a chance that there had been a significant change in the population and that the test should have thrown out previous weeks, but did not (a false negative). To try and account for this we put a double weight on the newest week so that the effects of any false negative would be lessened and that double weights would not hurt true negatives since the population was hypothetically the same between the two weeks. However, the removal of this weight during later iterations had little effect, thus it was deemed unnecessary and potentially problematic, so it was eliminated in the final model.
# Let's compare!
Let’s compare the regression model used in our 2018 Oracle of Blair to our current z-test model. Starting with the regression model, the first step is to first obtain the weights for each of the polls. The traditional method of calculating weights was to use an exponential. In 2018, the poll's relative values of the function: $$f(x) = e^{-t/30}$$ where t is the number of days between the election and the day the poll finished, determined its weight. Therefore, if there are m polls in a district and the ath poll is from $$t_a$$ days before the election, it will have weight $$w_a$$ according the the equation: $$W_a=\frac{e^{-t_a/30}}{\sum\limits_{i=1}^m e^{-t_i/30}}$$
We’ll focus our comparison on two key swing states: Florida and Texas.
The Oracle’s current prediction for Florida is that 49.3% of the two-party vote will go to Trump. The polling average predicted by the regression model is 48.5%, which when averaged with the priors (as described in our methodology section) gives a prediction of 48.7%.
As for Texas, the Oracle’s current prediction is that 50.2% of the two-party vote will go to Trump. If we had used the traditional regression model, the poll average would have been 51% and the final prediction would be 51.34%.
These may not seem like large differences, but for elections this close, a percentage point or two can be a big deal in the chances of winning. For example, if we were using our old model with Texas, we would give Trump almost a 62% chance of winning, a much less close race than we are currently predicting. Similarly, in Florida, we would give Trump less than a 42% chance of winning.
The fact that the Z-test model predicts closer races in both of these states is quite intriguing, especially because it shows a Republican-leaning state shifting to the left and a Democratic-leaning state shifting to the right. This means that the earlier polls excluded by the Z-test model but included by the regression model must be less competitive than the recent ones, suggesting that the race has, if anything, become closer in the past few weeks (at least in these states).
So yes, the change in model does make a difference, if not an extraordinarily dramatic one. Is it perfect? Well, of course there’s still room for improvement. For example, when calculating the portion of the standard deviation due to polling variation, it would be more accurate to count the number of respondents in the polls rather than just the number of polls. But overall, we believe that the Z-test model is a better predictor than a traditional regression model. We’ll just have to wait and see if our confidence was well founded.
|
2020-12-02 18:37:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5298250317573547, "perplexity": 763.5042087922526}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141715252.96/warc/CC-MAIN-20201202175113-20201202205113-00516.warc.gz"}
|
http://www.phy.ntnu.edu.tw/ntnujava/msg.php?id=7129
|
The applet for this topic is related to the power loss of an resistor $P(t)=I(t)V(t)=I^2(t) R$ The voltage and current are in phase.
The URL you referer to is another issue:
AC Power due to phase different between current and voltage (RC or RL circuit).
It is a different story!
|
2020-06-06 12:19:07
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.715082585811615, "perplexity": 1393.418099586552}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348513230.90/warc/CC-MAIN-20200606093706-20200606123706-00455.warc.gz"}
|
https://www.projectrhea.org/rhea/index.php/Project_ROAR:Rhea_Online_Testing_Resource
|
# Project ROAR: Rhea Online Testing Resource
Project ROAR was initiated in the Spring semester of 2010, as an attempt to provide students and professors the ability to create online tests; a forum for academic interaction, where professors could test students, and students in turn could post their own questions. The inspiration for this project came from the evident lack of an open source testing platform. Freshmen in the school of engineering, physics and math for example, are required to pay as much as $25 for similar testing software, simply to have access to pre-existing quizzes and tests. The RHEA Development team decided to overcome this problem by conceptualizing an open source web-app based on the Rhea server, that would give students and professors the same ability without the need for these expensive third party software. ROAR can be visited here. ## The ROAR Team ## Documentation • Project ROAR is intended to be a long-term project that starts off with a concept and basic functionality, which can then be expanded by future teams. • This page will serve as the ROAR development team project log, wherein all the documentation and project details may be accessed by future development teams. ## The initial thought process: how ROAR was born This section includes a preview of what the development team initially came up with for how ROAR should look and feel, as well as its functionality. The following PDF includes the thinking behind the name ROAR, as well as a mock web page showing its intended features. ## Tools • Since the eventual goal for ROAR is to be integrated into the MediaWiki platform where RHEA lives, what is needed is a php or javascript based web app that communicated with the RHEA database. • Therefore, for those interested in the project, a working knowledge of the following tools is recommended: • HTML • PHP • CSS • SQL • In addition knowing how to set up and work on a LAMP/WAMP server is very useful. ## Code Repository • Rather than making individual form handling pages, it is much cleaner to organize the web app using functions. Here is how a well organized php file ends up looking like ***************************************** <?php function basic_html(){$html = <<<page_html
<!--------SOME HTML---------------->
page_html;
return $html; } function1(){ } function2(){ } if{(some action required)$html .= functioncall1();
}elseif{(some other action)
$html .= functioncall2(); }else{$html .= basic_html;
}
echo $html; ?> *********************************************** • Thus, the entire program is extremely easy to manage and various actions can be handled separately to display certain html code that can be neatly tucked away within functions. • Now what was given above was a very top level design of how to make a web app. The following section provides code snippets that explain the process in more detail. ## Actions • One of the way php decides what to do with user input data is through actions. • HTML forms, radio buttons, checklists etc. all need a defined action when the data is submitted. The handling action may be a page, or another function within the web page. • Since we described in built functions as a better way of doing things, the method described here will use functions. <form action="index.php?action=doThis" method="post"><input type="submit" value = "Do This"></form> • This code generates a very simple button that says Do This. The form lives inside some html code within the file index.php. • Now whenever the user presses this button, the form passes the value doThis into the form action. • This action value is now available for handling, but to access it, we need the following code: $action = $_GET["action"]; • Once this is done, the value of action lives inside the variable$action. Now this can be used to call a certain function that handles this particular action. Something like
$html .= handleAction($action);
function handleAction($action){ if($action == "doThis"{
$html = some_html_code for the desired action} } return$html
}
• What this means is that everytime the variable action receives the value doThis from the above form, the handleAction function loads
the desired action html code into the variable $html which is returned by the handleAction function. • Once returned the statement $html .= handleAction(\$action);
appends the returned html code to the existing html code hence effectively handling the action without changing the basic page structure.
## User Manual
ROAR currently lives on a test server at web page : msee393mac3.ecn.purdue.edu/roar which is our test page. The web page allows professors to create assessments for their classes and later place links for the created assignments on their web page. To do this, the following instructions apply:
• 1:Click on the link Create New Assignment
• 2:Login with your username and password ******** (At the moment, to test ROAR beta 1.1 we are only giving out the password for creating quizzes to users who want to participate in the Beta testing process. To get the password, please email dlamba above)
• 3:Once logged in, you may use the on-screen menu to create your quiz.
• 4:When finished, you may place a link to the created quiz on the New Assignments page by simply clicking Complete Quiz followed by OK.
• 6:When on your webpage/Rhea page, simply paste the hyperlink to the assessment and this should make a link to the quiz available.
## Semester Summary and Future Goals
The learning curve for making this web application was obviously a big one. While we got a basic page working that allows logged in users to create a quiz, as well as take one of the available quizzes; there were several more features that are summarized below that we would have liked to integrate into ROAR had we more time:
• The ability to have a formula bar to allow users type in symbolic equations into the questions.
• The ability to have a picture upload section in the question making section to allow users to add graphs, pictures and scanned notes
• The ability to have an equation checking algorithm that would let quiz takers enter a formula, in LaTex for example, and check for its correctness.
Last Edit:Dlamba 21:30, 3 May 2010 (UTC)
## Alumni Liaison
To all math majors: "Mathematics is a wonderfully rich subject."
Dr. Paul Garrett
|
2020-07-13 17:08:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2626018822193146, "perplexity": 3006.2138776551965}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657146247.90/warc/CC-MAIN-20200713162746-20200713192746-00007.warc.gz"}
|
https://chemistry.stackexchange.com/questions/154010/is-the-lone-electron-pair-of-an-amide-nitrogen-part-of-the-%CF%80-system-when-buildin/154037
|
# Is the lone electron pair of an amide nitrogen part of the π system when building a Hückel matrix?
Is the lone electron pair of the nitrogen atom part of the conjugated π system in the α,β-unsaturated amide pictured below?
Is there a general rule for choosing when to include a nitrogen atom in a Hückel matrix?
• I have edited your post for clarity. At the risk of incurring ill-will from our German folks, I have replaced your "ue" (yes, it's correct; no, most people don't know that) in the title and added back the umlaut in the text. Jul 15, 2021 at 23:14
• Pro tip: get to know a compose key and never fear Unicode German or Greek letters again with simple and intuitive keystrokes like RAlt " u for letter ü or RAlt * p for letter π (cc @ToddMinehardt). Jul 15, 2021 at 23:22
• @ToddMinehardt So to get it right - is or is not ue acceptable EN transcription for DE ü ? ( I guess such transcriptions would get hard time for CZ ones ) // An alternative is to have a short list of frequent UC characters and codes, performing either copy/paste either invoking unicode insertion by the code, like CTRL+SHIFT+U<unicode>ENTER on Linux. Jul 16, 2021 at 8:54
• Actually I am German, so I have the ü right on my keyboard. I have just always read the "ue" spelling in English literature, so I thought I'd not confuse you with umlauts :D Jul 16, 2021 at 9:12
• I guess you can include the lone pair in the Huckel matrix, but it will probably become very difficult to solve by hand. If you have a computer, that will solve it. If you ignore the lone pair, your system is only a linear chain of p-orbitals, it will give that diagonal looking matrix (not a diagonal matrix!!) that you could solve with a pen and paper. Jul 16, 2021 at 10:41
|
2022-07-05 10:04:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5482900142669678, "perplexity": 1734.1890425588097}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104542759.82/warc/CC-MAIN-20220705083545-20220705113545-00698.warc.gz"}
|
https://horizons.askdefine.com/
|
# User Contributed Dictionary
## English
### Noun
horizons
1. Plural of horizon
# Extensive Definition
The horizon (Ancient Greek ὁ ὁρίζων, /ho horídzôn/, from ὁρίζειν, "to limit") is the apparent line that separates earth from sky.
More precisely, it is the line that divides all of the directions one can possibly look into two categories: those which intersect the Earth's surface, and those which do not. At many locations, the true horizon is obscured by nearby trees, buildings, mountains and so forth. The resulting intersection of earth and sky is instead described as the visible horizon.
## Appearance and usage
For observers aboard a ship at sea, the true horizon is strikingly apparent. Historically, the distance to the visible horizon has been extremely important as it represented the maximum range of communication and vision before the development of the radio and the telegraph. Even today, when flying an aircraft under Visual Flight Rules, a technique called attitude flying is used to control the aircraft, where the pilot uses the visual relationship between the aircraft's nose and the horizon to control the aircraft. A pilot can also retain their spatial orientation by referring to the horizon.
In many contexts, especially perspective drawing, the curvature of the earth is typically disregarded and the horizon is considered the theoretical line to which points on any horizontal plane converge (when projected onto the picture plane) as their distance from the observer increases. Note that, for observers near the ground, the difference between this geometrical horizon (which assumes a perfectly flat, infinite ground plane) and the true horizon (which assumes a spherical Earth surface) is typically imperceptibly small, because of the relative size of the observer. That is, if the Earth were truly flat, there would still be a visible horizon line, and, to ground based viewers, its position and appearance would not be significantly different from what we see on our curved Earth.
In astronomy the horizon is the horizontal plane through (the eyes of) the observer. It is the fundamental plane of the horizontal coordinate system, the locus of points which have an altitude of zero degrees. While similar in ways to the geometrical horizon described above, in this context a horizon may be considered to be a plane in space, rather than a line on a picture plane.
## Distance to the horizon
The straight line of sight distance d in kilometers to the true horizon on earth is approximately
d = \sqrt,
where h is the height above ground or sea level (in meters) of the eye of the observer. Examples:
• For an observer standing on the ground with h = 1.70 m (average eye-level height), the horizon appears at a distance of 4.7 km.
• For an observer standing on a hill or tower of 100 m in height, the horizon appears at a distance of 36 km.
To compute the height of a tower, the mast of a ship or a hilltop visible above the horizon, add the horizon distance for that height. For example, standing on the ground with h = 1.70 m, one can see, weather permitting, the tip of a tower of 100 m height at a distance of 4.7+36 ≈ 41 km.
In the Imperial version of the formula, 13 is replaced by 1.5, h is in feet and d is in miles. Examples:
• For observers on the ground with eye-level at h = 5 ft 7 in (5.583 ft), the horizon appears at a distance of 2.89 miles.
• For observers standing on a hill or tower 100 ft in height, the horizon appears at a distance of 12.25 miles.
The metric formula is reasonable (and the Imperial one is actually quite precise) when h is much smaller than the radius of the Earth (6371 km). The exact formula for distance from the viewpoint to the horizon, applicable even for satellites, is
d = \sqrt,
where R is the radius of the Earth (note: both R and h in this equation must be given in the same units (e.g. kilometers), but any consistent units will work).
Another relationship involves the arc length distance s along the curved surface of the Earth to the bottom of object:
\cos\frac=\frac.
Solving for s gives the formula
s=R\cos^\frac.
The distances d and s are nearly the same when the height of the object is negligible compared to the radius (that is, h<<R).
As a final note, the actual visual horizon is slightly farther away than the calculated visual horizon, due to the slight refraction of light rays due to the atmospheric density gradient. This effect can be taken into account by using a "virtual radius" that is typically about 20% larger than the true radius of the Earth.
## Curvature of the horizon
From a point above the surface the horizon appears slightly bent. There is a basic geometrical relationship between this visual curvature \kappa, the altitude and the Earth's radius. It is
\kappa=\sqrt\ .
The curvature is the reciprocal of the curvature angular radius in radians. A curvature of 1 appears as a circle of an angular radius of 45° corresponding to an altitude of approximately 2640 km above the Earth's surface. At an altitude of 10 km (33,000 ft, the typical cruising altitude of an airliner) the mathematical curvature of the horizon is about 0.056, the same curvature of the rim of circle with a radius of 10 metres that is viewed from 56 centimetres. However, the apparent curvature is less than that due to refraction of light in the atmosphere and because the horizon is often masked by high cloud layers that reduce the altitude above the visual surface.
horizons in Arabic: خط الأفق
horizons in Asturian: Horizonte
horizons in Aymara: Chhaqachhaqa
horizons in Bulgarian: Хоризонт
horizons in Catalan: Horitzó
horizons in Czech: Horizont
horizons in Danish: Horisont (geografi)
horizons in German: Horizont
horizons in Spanish: Horizonte
horizons in Esperanto: Horizonto
horizons in Galician: Horizonte (xeografía)
horizons in Korean: 수평선
horizons in Croatian: Obzor
horizons in Bishnupriya: হোরিজোন্টে
horizons in Indonesian: Horizon
horizons in Icelandic: Sjóndeildarhringur
horizons in Italian: Orizzonte
horizons in Lithuanian: Horizontas
horizons in Hungarian: Horizont
horizons in Malay (macrolanguage): Horizon
horizons in Dutch: Horizon
horizons in Japanese: 地平線
horizons in Norwegian: Horisont
horizons in Polish: Horyzont
horizons in Portuguese: Horizonte
horizons in Romanian: Horizonte
horizons in Quechua: Pachapanta
horizons in Russian: Горизонт
horizons in Albanian: Horizonti
horizons in Simple English: Horizon
horizons in Slovak: Obzor (Zem)
horizons in Slovenian: Obzorje
horizons in Serbian: Хоризонт (астрономија)
horizons in Serbo-Croatian: Horizont (astronomija)
horizons in Swedish: Horisont
horizons in Thai: ขอบฟ้า
horizons in Turkish: Ufuk
horizons in Chinese: 地平線
|
2018-01-17 10:36:28
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8192968368530273, "perplexity": 1559.4411350426353}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084886895.18/warc/CC-MAIN-20180117102533-20180117122533-00142.warc.gz"}
|
https://www.physicsforums.com/threads/egg-drop-experiment-with-a-twist.683413/
|
# Egg drop experiment with a twist
Hello,
My physics class is doing the classic egg drop experiment, but with a different twist.
We are asked to find the length of the bungee cord needed given a mass and a height to drop it from. The success of the lab is determined by how close we can get to the ground without touching.
We are using a single bungee cord, and not multiple bungees.
This is a common question here, but I havent been able to determine the equations to use to determine what I need.
I have tried using U1 + K1 = U2 + K2 +Uspring
Where U1 is the initial potential energy (mgh1)
K1 is the initial kinetic energy (K1 = 0)
U2 is the final kinetic energy (mgh2 with h2 being the closest distance off the ground)
K2 is the final kinetic energy (K2 = 0)
And Uspring is the integral of Force with respect to distance F(x) from 0 to the unstretched bungee length - h2
Our F(x) function is 2.823 - 6.696x + 31.64x^2 - 110.9x^3 + 177.3x^4 - 103.3x^5
This is the characteristic that our bungee follows.
Using this, I get the correct value, if the egg wasn't dropped, but rather if the egg was just hanging there.
My question is, what am I doing wrong? and what equations/principles could I use to determine the length of the string that I need?
Related Introductory Physics Homework Help News on Phys.org
Should I define it from the unstretched length to the height off the ground? I know that the integral gives me the work done by the bungee, so it would make sense, but I am out of the lab now, and have no way of testing this.
I would imagine you have x defined as the extension of the string, but I don't really know where your equation for F(x) has come from so I can't tell if this is the case or not.
Do you know the modulus of elasticity, $\lambda$, of the string? If yes then you can just use
$E.P.E = \frac{\lambda e^2}{2l}$
where e is the extension. This would be a lot easier and you woudn't need to use an integral.
If you don't know $\lambda$ then assuming your equation for F(x) is correct with x as the extension, you would want to integrate over the entire extension, i.e. x=0 to the distance from where the string first goes taut to the bottom, $h_1 - h_2 - l$, where l is the natural (unstretched) length of the string. Draw a diagram to help you see this. :)
Thank you. I will try this and see if it works. Unfortunately I dont know the modulus of elasticity, but I will try integrating it :)
|
2020-09-28 16:03:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6310921907424927, "perplexity": 363.45667583172724}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600401601278.97/warc/CC-MAIN-20200928135709-20200928165709-00378.warc.gz"}
|
https://www.growingiq.com/dlt-collection/Ms.-Kayla/level5/friday/795e3a9f-5141-4787-be39-5ac0e976107e
|
## Ms. Kayla
### Target 1
###### Lesson Type:
Review
Critical Thinking
:
Logical Reasoning
Use reasoning to solve logic puzzles.
###### 1:
Use the given information in a problem to identify the correct answer.
###### 2:
Follow given rules to solve problems.
5th
###### Vocabulary:
Theory, Pattern, Sequence, Rule, Rotate
Activities:
1. Student had to move the objects to rotate the image 180 degress.
2. Student then worked backwards to determine a pattern and worked on additional puzzles to test out the pattern theory.
### Target 2
:
###### 1:
Utilize the standard algorithm to divide fractions.
###### 2:
With two fractions, find the reciprocal of the 2nd fraction, replace the ÷ with x, multiply numerators and denominators, simplify.
###### 3:
With a whole number by a fraction, convert the whole number to an improper fraction, find the reciprocal of the 2nd fraction, replace the ÷ with x, multiply numerators and denominators, simplify.
###### 4:
With a fraction by a whole number, convert the whole number to improper fraction, find the reciprocal of the 2nd number, replace the ÷ with x, multiply the numerators and denominators, simplify.
5th
###### Vocabulary:
Mixed Number, Improper Fraction, Division, Multiply, Convert, Simplest form, Numerator, Denominator, Inverse
Activities:
1. Student practiced dividing and multiplying fractions that involved both mixed and improper fractions.
2. Student was taught to use multiplication (inverse) to solve a problem that required the division of fractions.
3. Student was shown a different method of reducing a fraction to its simplest form by divisibility of either the numerator or denominator of the corresponding fraction.
### Home Exploration
3 2/3 divided by 7/8
What would be your first step to solve this problem?
What is the inverse of division?
:
Activities:
© GrowingIQ Inc.
|
2021-03-05 00:58:39
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9098962545394897, "perplexity": 2673.9644881376535}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178369553.75/warc/CC-MAIN-20210304235759-20210305025759-00248.warc.gz"}
|
https://www.gamedev.net/forums/topic/688293-the-processworkflow-of-designing-game-characters/
|
The Process/Workflow of Designing Game Characters
Recommended Posts
Posted (edited)
Hey All,
I just joined today and I've been poking around at a bunch of different threads picking up whatever tidbits of knowledge I can, but I figured the fastest way to get direct answers is to just post my questions. And if these questions have been answer before, you can feel free to yell at me and I'll search harder next time.
I'm very new to this whole game design thing, especially on the art side since my formal background is as a software engineer - I picked up game stuff as a hobby just a month or so ago and I've been messing around with Unreal Engine 4 and Unity. I thought it'd be a fun and a cool way to spend my free time, but I've quickly come to the realization that I'm diving into a gigantic universe with basically endless information to learn... it is fun, but it's also all kinds of stressful when you're a perfectionist like me. I'm crying just thinking about how much there is to do and how little I know to design the game of my dreams.
But I guess the first step is to always just learn as much as I can, so here goes my list of questions:
• From what I've seen poking around, I hear that some people design characters via sculpting with ZBrush from a base mesh or some other programs and then convert their high-poly meshes to low-poly meshes for use in-game. That sounds really efficient to me for designing unique characters - is this a common workflow in industry?
• How do games achieve the illusion of having a ton characters/people (NPCs)? I understand that characters have "skins" that are interchangeable. Is this simple a swapping of textures on the same model? Is it an actual change of the entire polygonal mesh?
• In theory could you apply the same animations to multiple skeleton rigs of varying sizes? I'm under the (possibly false) impression that you can have different skeleton rig sizes as long as the joints are the same, right?
• One of the big concepts I'm trying to achieve in my game is hyperrealism in terms of movement for characters. Is it honestly possible to animate all of the movements using stuff like Poser or is it really better off mo'capped, like I've seen in industry game studios?
• With brutal honesty, how much can one achieve as a solo developer/designer? Is it just a matter of time, or are some goals completely out of reach for a single individual?
Thanks for your time guys. I'm looking forward to building stuff over the next several years.
Edited by mikeyvxt
Share on other sites
Posted (edited)
Hey All,
I just joined today and I've been poking around at a bunch of different threads picking up whatever tidbits of knowledge I can, but I figured the fastest way to get direct answers is to just post my questions. And if these questions have been answer before, you can feel free to yell at me and I'll search harder next time.
I'm very new to this whole game design thing, especially on the art side since my formal background is as a software engineer - I picked up game stuff as a hobby just a month or so ago and I've been messing around with Unreal Engine 4 and Unity. I thought it'd be a fun and a cool way to spend my free time, but I've quickly come to the realization that I'm diving into a gigantic universe with basically endless information to learn... it is fun, but it's also all kinds of stressful when you're a perfectionist like me. I'm crying just thinking about how much there is to do and how little I know to design the game of my dreams.
But I guess the first step is to always just learn as much as I can, so here goes my list of questions:
• From what I've seen poking around, I hear that some people design characters via sculpting with ZBrush from a base mesh or some other programs and then convert their high-poly meshes to low-poly meshes for use in-game. That sounds really efficient to me for designing unique characters - is this a common workflow in industry?
• How do games achieve the illusion of having a ton characters/people (NPCs)? I understand that characters have "skins" that are interchangeable. Is this simple a swapping of textures on the same model? Is it an actual change of the entire polygonal mesh?
• In theory could you apply the same animations to multiple skeleton rigs of varying sizes? I'm under the (possibly false) impression that you can have different skeleton rig sizes as long as the joints are the same, right?
• One of the big concepts I'm trying to achieve in my game is hyperrealism in terms of movement for characters. Is it honestly possible to animate all of the movements using stuff like Poser or is it really better off mo'capped, like I've seen in industry game studios?
• With brutal honesty, how much can one achieve as a solo developer/designer? Is it just a matter of time, or are some goals completely out of reach for a single individual?
Thanks for your time guys. I'm looking forward to building stuff over the next several years.
1) ZBrush is pretty much an industry standard. From personal expierience I'd say there are alternatives just as good, but cheaper, if you want to get into that kind of thing. There is also Blender as a free alternative, not as good, but usable.
But first things first.
ZBrush is MOSTLY used for sculpting high-poly models that then get baked onto low-poly retopo meshes. ZBrush usually is used with a base mesh created in a boxmodelling 3D App like Maya, Max or Blender. There are tools and workflows to create the base mesh in ZBrush itself, but the oldschool way of creating the base somewhere else is still often found. Also, oftentimes base meshes are kept and reused from project to project.
Efficient - Define that word before using it for describing the AAA studios art pipeline of choice. Which ZBrush mostly is. MOST Indie studios will NOT have ZBrush in their pipeline, at least not for high-poly scrulpting, BECAUSE high-poly sculpting is not a very efficient way to create 3D models... especially when you are creating a simple low-poly look, like many Indie games do (which do it just as often because they lack the resources to create beautiful AAA style high-poly art for a full game as they do it as an artistic statement).
Now, if you like high-poly 3D art, I am the last person trying to tell not to touch it with a ten foot pole. Because creating high-poly art can be a ton of fun, especially after you have spent many years to get a certain degree of mastery on the tools and develop your artistic skills.
But just be warned that you will have a rough time trying to create ALL the art for a full game, no matter how small that game is, in true AAA high-poly fashion, and you will need a certain degree of artistic skill, with simpler art styles often letting you get away with less artistic skill.
And that is before getting into how retopology, rigging and animating very detailled models is even more of a pain, and demanding a whole new skillset to be developed again.
2) Most of the more detailled "skins" nowadays are different 3D models that can be mapped to the same rig and use the same animations as the original 3D model (which in turn means the proportions have to match more or less, which of course is also a necessity for gameplay reasons).
If you can share rigs and animations between models you can save a ton of time, and you can go a step further and create modular models (which is how some character designers work, apart from morph targets / blend shape / shape key systems that actually morph the mesh between different end states).
Most of the time NPCs in crowds are much simpler 3D models than NPCs you find standing around on their own, which again are simpler than important story NPCs, which will never be of the sam quality as your Player characters. Just look at different characters in your favorite RPG and see how much less gubbinz an unimportant NPC often has on the model compared to Geralt or whoever your Player character is... which makes sense storywise of course, but is also simplyfing modelling and animating that NPC compared to your player character.
Mostly the illusion is making clever use of the limited resources you can spend on your NPCs... and if we are talking about AAA games, even limited resources for that game being a ton of money :)
3) Not the most knowledgeable person when it comes to animation retargetting, but AFAIK that is the term you are looking for. You are trying to retarget an animation to a different skeleton.
4) Hyperrealism is Hyperexpensive... and we are talking about tens of millions of USD here, not just some grands. But sure, go for it.
Mocap is a good solution to achieve the same result a more skilled animator could do by hand. It can also be a faster way of achieving the same results given your Mocap setup is good, and everyone is well trained in it. Just be aware that most low cost mocap alternatives are... suboptimal, to say the least. The like of iPi and similar systems working with multiple cameras can be a bitch to setup, need a lot of room, and can need a lot of postprocessing or reshooting as the result often is riddled with noise.
I have seen a very promising low cost setup using accelerometers and I guess bluetooth units strapped to the limbs to capture the data. Created by some chinese company, sold for <5000$, it was showing promising results for a reasonable cost. But before pondering about that, you need a good rig. And if we are talking about Hyperrealism, you will spend quite some time setting up that rig. After all, what good is the best animation if the Knee is bending at the wrong place, or the model is not deformed right with blend shapes / shape keys when a joint is bent (shape keys are often employed to work around the limitations of simple weight painted deformation of a skinned mesh)? What good is it when the clothing is not also animated nicely (often just slapping a cloth physics setup on it will give you suboptimal results or costs too much performance), of the face looks like a death mask because no matter how many bones you throw in there, you will not get realistic results without shape keys and wrinkle maps? Another thing is that some animations just cannot be done with mocap. When your hero needs to be larger than life, you will either struggle to find an actor that can stretch its limbs enough or do all the backflips and trickery, or you will need to do a lot of manual retouching after mocap anyway. So learning to animat models the manual way IMO is not a bad idea before getting your feet wet with mocap. 5) Depends on your timeframe. In the usual 2-4 year timeframe, with a single dev? Not much if you are aiming for AAA visuals and scope. Especially not when also working a day job or part time (which, unless you already have some games that rake in a lot of money, or live in your parents basement, is a necessity for most devs, even the ones that want to pay their bills from their game development), and not being an industry veteran with many years of expierience. My recommendation would be to get your feet wet, learn the tools of the trade while also getting a reality check as to how long some things take, while working on very, very small projects if you need that to motivate you. You can always tackle a much bigger project some years down the line when you know what you can achieve in what time, and are ready to spend 5-10 years of your life on a slightly larger project (that is still far away from AAA quality), or are ready to go out and find a team to join (just be aware that its hard to find people working on YOUR ideas for free), or spend some of your money on external help (which, given you have enough cash you can burn is the best idea probably... just make sure you are ready to loose all that money, game development seldom is giving you a nice RoI unless you can spend big or spend wisely) EDIT: oh, about the ZBrush alternative I was talking about. I am using 3D Coat mostly, which CAN be just as usable as ZBrush as a sculpting tool, while being much better as a painting and retopo tool and costing half as much as ZBrush. ZBrush does outdo it in some areas in sculpting (mostly some specialist Brushes which can speed up your work if you have learned how to use them), and doesn't need a beefy GPU to run as most things are running on the CPU while 3D Coat pushes a lot of things to the GPU, but in turn you get a dated, obscure UI in ZBrush that will give you nightmares if you do not use ZBrush every day, whereas 3D Coat has a more friendly, more intuitive UI. I ended up spending on a license for both of them, but tend to use 3D Coat over ZBrush most of the time unless its something where the speedup by a specific ZBrush Brush is making up for the headache of getting into the ZBrush UI again. Which to date is mostly sculpting rocky surfaces from scratch. That "adaptive trim" brush in ZBrush is just a godsend for that. Edited by Gian-Reto Share this post Link to post Share on other sites Hey Gian-Reto, Thanks for the in-depth response and the reality check. I'm excited and ambitious, but I'm also fully aware that this is a huge universe I'm diving into. I don't expect to put out A+ games on a yearly basis or anything, and I'm also fully aware of my own limitations and skills as an individual. I'll definitely take your advice and get my feet wet, and I'm not about to let the vastness of the field discourage me from trying. All of this stuff looks awesome. Efficient - Define that word before using it for describing the AAA studios art pipeline of choice. Which ZBrush mostly is. MOST Indie studios will NOT have ZBrush in their pipeline, at least not for high-poly sculpting, BECAUSE high-poly sculpting is not a very efficient way to create 3D models... especially when you are creating a simple low-poly look, like many Indie games do (which do it just as often because they lack the resources to create beautiful AAA style high-poly art for a full game as they do it as an artistic statement). I guess to define "efficient", I suppose I mean the ability to put out the highest quality models/art as fast as possible. For AAA studios with tons of resources, Zbrush High-poly design -> retopologize to low-poly probably works (and it looks good). But you mentioned, Zbrush doesn't really show up in an Indie Studio's pipeline. So what do they do differently? Is it just different software or do they use an entirely different workflow/pipeline? At some point they'll still have to do 3D modeling to throw into the game right? Share this post Link to post Share on other sites Posted (edited) Gian-Reto, said most of what was needed, there is only one thing I want to stress. Making 3D models and Animating, are two completely separate fields. Because mastering any skill takes a live time, most people focus on one skill while only lightly dabbling in a other. What that means is a 3D artist can make realistic characters, while only knowing the basics of animations. Animators can create realistic animations while knowing only the basics of modeling. The time it takes a 3D modeler to make his first realistic human is around 4-6 years after making there first model. This is 4-6 years of extreme 3D workouts, spending more than 12 hours a day modeling, it's at this point that the modeler is considered professional. Even after that time the modeler still has to spend hours modeling, just to retain what they know. There isn't a single 3D modeler that I know who doesn't start the day with a warm up model. You can expect that animation is just as strict to master, that is why it takes a team to make a single character from start to finish. I guess to define "efficient", I suppose I mean the ability to put out the highest quality models/art as fast as possible. For AAA studios with tons of resources, Zbrush High-poly design -> retopologize to low-poly probably works (and it looks good). But you mentioned, Zbrush doesn't really show up in an Indie Studio's pipeline. So what do they do differently? It's all about cost, making a zBrush model and retopologizing is expensive. Doing a character of that level will take over a week for just the mesh. When doing contract work for AAA developers, I charge$2000-$3000 USD for the first mesh, then charge around$800 for every change. The total cost ends around $40 000 -$60 000 USD, depending on how complex the model is.
As the quality of the model scales so does every thing else, a 60 000 triangle model needs motion capture; using simple hand animations will cause the model to look stiff. Once motion capture is used you need to pay an actor, a camera crew, rent some motion capture suits and hire a animator.
I am a professional 3D modeler, even in my own games I stick to low quality models. At best one of my own game models will sell around $1200 if it's a full IP agreement. I am only using zBrush as part of the planing, to quick sketch a model. Indie developers have a lot of limitations, just the license cost of zbrush for a small team is more than$8000 USD, that isn't even with the cost of having 3DS Max, Photoshop, Quixel, Simplygon and SpeedTree.
So no indie team will aim for that level of quality.
Edited by Scouting Ninja
Share on other sites
Okay, so let me try to put things into perspective here.
This is a character I modelled and animated last year. I wanted to see how quickly I could create a character if I stayed with the minimal amount of details needed for a certain view. Also my first time rigging and animating a character.
Took me 8 hours for the high poly sculpt (which was about as detailed as you can still see on the gif above, character was planned for a "commandos" style game, where character can be more detailled than in an RTS, but are never seen up close).
Took me about 8 hours for low poly retopo and the textures (well base ones, I added a subsurface texture later which took another 30 minutes to create).
Rigging and creating 2 animations by hand took me about 35 hours. But as said, first animation project, and potentially to complicated with all the stuff hanging off the belt, as well as clothing animated with shape keys / blendshapes.
Character is pretty usable now. I would have to go back and redo part of the rig though because I didn't combine all the meshes before creating the shape keys / blendshapes. And of course a ton of additional animations would be needed, which because of how the rig is setup, are not as easy to do as I would have hoped. I had a ton of troubles with getting the rig to stick to the two handed weapon without the hands moving too much when the arms are moved, so I would have to retouch every animation frame by hand to make sure the hand does not move around on the gun.
Not to bad for a first try, still, how many characters do you need for a game? This would quickly become a fulltime job on its own! And that is just for rather undetailled NPC characters suitable for RTS or maybe a commandos style game, but not an FPS. Add a little bit more details and a more deliberate design phase for a Player character, and you will struggle to finish high poly modelling that in 16 hours. Add the details needed for an FPS, and you can probably quadruple the amount of hours spent on high poly modelling.
Textures were mostly flat colors, using baked maps and dirt maps to enhance them. Also I have quite the pipeline by now to bake out all the needed PBR maps. If you want to go more handcrafted, you can again double and triple the time needed, especially for a more detailled FPS ready sculpt.
My time needed on rigging and animation most probably are so high because of my noobishness... I was looking up stuff as I went along. And the rig wasn't the most straight forward with the twohanded weapon, animated clothing and bags / grenades on the belt. Still, again, if you want to go closer with the camera to the character, you will get away with less things. I first was convinced I needed wrinkle maps for the clothing to look nice. Turns out no, at that distance you can hardly see that the folds and wrinkles on the clothing is pretty static.
When looking at clothing from close up, and wanting hyperrealism, you might want spent time on that. No point in having this nicelly modelled character when the clothing looks stiff and lifeless.
Last thing to say is, I am by no means a pro. I have some artistic skill I think, and spent the last few years doing quite some 3D projects in my free time, but I am a programmer in my day job, so clearly someone who did this for a living could slash the time needed in half, especially when already having a more efficient art pipeline and bits and bobs to kitbash with (only one I used here was the gun, which was made from an existing gun souped up a little bit to fit the character).
As you see, its a combinatorial explosion the higher you move up the quality meter. Even the pros spend months on some of the AAA Characters alone. 3, maybe 6 months is not uncommon for a very detailled model. Sure, with expierience you get faster, and you can improve your pipeline with tools that let you be more productive.
The most important factor though is to limit your scope. Moving down a notch on the quality might slash the time needed to create the character not only in half, but maybe down to a quarter or so. And for certain types of games, that quality might be completly acceptable, or even not distinguishable from a higher quality character. For example thirdperson cameras that do not let you get close to any of the characters can let you get away with far less details.
Art Style is often not chosen just for a certain effect on the player, but also to keep the hardware from being fried by polygon overload, and most important to keep the load on artists lighter.
This is why Indie games usually go for a simpler art style, if they have to deliver the same amount of game with a fifth or tenth of the artists, no amount of pipeline optimization and procedural generation can prevent them from making some hard decisions to lower the visual quality a notch or two.
Create an account
Register a new account
• Forum Statistics
• Total Topics
628662
• Total Posts
2984105
• 10
• 9
• 9
• 10
• 21
|
2017-12-12 17:41:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.197762131690979, "perplexity": 1322.2092907544804}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948517845.16/warc/CC-MAIN-20171212173259-20171212193259-00688.warc.gz"}
|
https://blog.computationalcomplexity.org/2023/01/martin-davis-passed-away-on-jan-1-2023.html?m=0
|
## Monday, January 09, 2023
### Martin Davis Passed Away on Jan 1, 2023
As you probably already know from other sources, Martin Davis passed away on Jan 1, 2023, at the age of 94. His wife Virginia died a few hours later.
He majored in math at Brooklyn College and graduated in 1948.
He got his PhD under Alonzo Church at the Princeton in 1950.
Two years for a PhD seems fast!
His Phd was on Recursion theory.
In it he conjectured that Hilbert's tenth problem (below) is undecidable.
He is known for the following (if you know more, please leave a comment).
1) Hilbert's Tenth Problem.
Hilbert posed 23 problems in the year 1900 for mathematicians to work on
over the next 100 years. Hilbert's tenth problem, in modern terminology, was
Find an algorithm that will, given a polynomial p \in Z[x_1,...,x_n],
determine it has a Diophantine solution (that is, a_1,...,a_n\in Z
such that p(a_1,...,a_n)=0).
In Hilbert's article he did say in a preface to all of the problems. Here is the exact quote:
Occasionally it happens that we seek the solution under insufficient hypotheses or in an incorrect sense, and for this reason do not succeed. The problem then arises: to show the impossibility of the solution under the given hypotheses or in the sense contemplated.
Hilbert had hoped that this problem would lead to deep results in number theory. And it has to some extend. However this went from being a problem in number theory to a problem in logic. That might not be quite right: the result did use number theory.
In 1961 Davis-Putnam-Robinson showed that the problem is undecidable IF you also allow exponentials. This may have been a turning point for the conventional wisdom to shift from Probably Solvable' to Probably Unsolvable.'
Martin Davis predicted that the H10 would be shown undecidable by a young Russian by the end of the decade. He was correct. Yuri Matiyasevich did indeed prove H10 undecidable in 1970. By all account Davis was delighted. When the result is cited usually all four people are credited which is as it should be. He wrote an excellent exposition of the complete proof from soup to nuts in:
Hilbert's tenth problem is unsolvable, American Math Monthly, Volume 80, No 4, 233-269.
When I first heard of this result I assumed that the number of variables and the degree to get undecidability was huge. I was wrong. I wrote a survey of H10 emphasizing what happens if you bound the degree and the number of variables, see here
2) SAT Solvers. Davis-Putnam-Logemann-Loveland outlined a SAT Solver, or really a class os SAT Solvers. While I doubt it was the first SAT Solver, it was an early one that tried to cut down on the time needed.
3) He wrote the following books:
Computability and Unsolvability (1958, reprinted 1982)
Applied Non-Standard Analysis (1977, reprinted 2014)
Computability, Complexity, and Languages: Fundamentals of Theoretical Computer Science (1994 (with Elaine Weyuker)
The Universal Computer: The Road from Leibniz to Turing (2000, reprinted as
Engines of Logic: Mathematicians and the origin of the computer)
The Undecidable: Basic Paper on undecidable propositions, unsolvable problems and computable functions (2004)
4) He was a recursion theorist who could actually program a Turing Machine to really do things. There are still some people who do that- getting the UTM down to a small number of states, and the work (the Wolfram Challenge), and the Busy Beaver Function (see Scott Aaronson's open problems column here,
but I think this is done less by academic recursion theorists than it used to be. I do not have my students in Automata Theory write any TM code. Clyde Kruskal, who had that same course from Martin Davis, thinks that I should.
4) For more on Martin Davis see this obit here and/or his Wikipedia page here
1. Thank you for this news. I for one did not know it. Very sad. (By the way, for some reason in the first half the formatting is that every sentence is its own paragraph.)
Thank you also for mentioning "The Universal Computer: The Road from Leibniz to Turing." One of his contributions that may be undervalued is the ability to understand exactly what is the point at issue, and to clearly state it, so that others of us can also understand that. In my Theory class I urge that book on my students.
2. Your dates for Davis's undergrad and PhD degrees seem to be wrong. According to your own wikipedia link, he got his bachelor's degree in 1948 and his PhD in 1950 (in 2 years, not 3). These dates (and the 2-year PhD) also seem to be confirmed in this interview (https://www.ams.org/notices/200805/tx080500560p.pdf).
3. Also he got his PhD at Princeton, not Chicago.
4. I would like to mention to his credit, three more biographical things:
1. He is the one who named it the "Halting Problem" from his 1958 text. Turing originally called these things "circular", "circle-free", and "satisfactory".
2. He wrote "Solvability, Provability, Definability: The Collected Works of Emil L. Post", which includes several pages of a kind of obituary or biography of Emil Post. He mentions what it was like as an undergraduate taking a class with Post.
3. He wrote the foreword to "Hillbert's 10th Problem" by Matijasevich (1993), In there, he describes his history and contribution to solving the problem.
5. As an older bloke who learned assembler early on (1971: IBM 1130, 1972: CDC 3600, 1973: PDP-6), I worry that younger comp sci. folks may not have good intuitions about what computation is about, so count me in as thinking theory courses should insist that students write Turing machine programs.
6. The 1st Edition of Computability, Complexity, and Languages: Fundamentals of Theoretical Computer Science is 1983 (not 1994). The 2nd edition (1994) has Ron SIgal as a co-author.
|
2023-04-02 03:54:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7231155037879944, "perplexity": 1291.6621456904131}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950373.88/warc/CC-MAIN-20230402012805-20230402042805-00310.warc.gz"}
|
https://www.physicsforums.com/threads/desperate-for-help-with-job-hunting.883020/
|
Physics Desperate for help with job hunting
1. Aug 25, 2016
highc23366
I have my BS in physics and am looking for anything , anything that i may be qualified for above 35k a year. I worked too hard to make $10-20 an hour and want something viable that i can excel at. Does anyone have any job hunting tips? job titles that i may be qualified for? etc ive been working retail for a year post graduation ... its miserable, and i cannot find a job / internship for the life of me, i've been looking so hard.... please don't suggest getting my masters or phd in physics, i have 0 interest, as it gets way too abstract for my tastes. i really wish i just chose engineering as my major.... sigh.....my little brother is making$60k a year as a civil engineer as a fresh graduate in arizona; however, i chose physics because i love it
2. Aug 25, 2016
Nidum
What skills do you actually have that would be useful to an employer ?
3. Aug 25, 2016
StatGuy2000
To the OP:
Following up on what Nidum had spoken, during your BS program, have you ever pursued internship or research opportunities while in school? Do you have any programming knowledge or experience? Do you have knowledge of statistics?
On a separate note, how have you been searching for work? Do you have a LinkedIn profile? Have you networked (perhaps contacting your old professors, past graduates from your program, attending local conferences, maybe even speaking to your brother in Arizona, etc.)?
You need to understand that employers are primarily concerned about what specific skills a potential candidate possesses that would be of benefit to them. You need to make the case that you possess the skills that are in demand. Depending on the answers you provide above, it doesn't sound like you have developed any marketable or immediately employable skills during your school, in which case you need to develop these somehow.
You state that you're not interested in a MS or PhD in physics. If you are absolutely dead set against further graduate studies in physics (including more practical fields like medical physics), then I would suggest a terminal masters program in another, cognate field that have solid employment prospects like computer science, statistics, or electrical engineering (it shouldn't be too difficult for a physics graduate to pursue such fields). Barring that, perhaps training programs offered at community colleges (e.g. X-ray or MRI technicians -- these would be especially useful for someone with an understanding of physics) may be of benefit of you.
4. Aug 25, 2016
gleem
Check out this article on the American Institute of Physics website. I lists employers in each state who have recently hired person with BS degrees in Physics. Find out who have job openings and what specifically they are looking for in skills.
5. Aug 25, 2016
Choppy
1. Professional Presence
I know this may seem trivial, but the typing in your post is informal and does not use proper sentences, capitalization, or grammar. It does not reflect the abilities of someone who has successfully completed a bachelor's degree in physics. Most employers will do at least a quick online search of any person they are seriously considering for a position. If you're hunting for a job, to the extent that you can control it, you want to try to present yourself online in a manner consisted with that which you are presenting to potential employers. This means everything from typing properly to cleaning up your Facebook or other social media pages.
I know at this point you're focused on just getting out of where you're currently at. But in my experience if your goals are too diverse you won't be able to make a serious run at any particular position or industry. Instead, try to narrow your scope to a particular industry for a while. As others have said, you have to look at your skill set and figure out how it measures up in that industry. Then, if you really want to advance in that area, you need to build yourself up to become a better candidate for the available positions - take additional courses, research the field, figure out how to make contacts in the field, attend conferences, etc.
3. Don't Rely on Your Degree
Unfortunately most of the people who really appreciate what a bachelor's degree in physics means are other people who have a bachelor's degree in physics. Most human resources people won't have much technical understanding about what their company or industry does and so won't be able to translate from your education into skills that they need. You have an education in physics. But now you need to figure out what skills you have (or what skills you can develop) that employers are going to want, and demonstrate in a clear manner that you have them.
6. Aug 25, 2016
highc23366
i made this post drunk....at 3am....so, yea ....
7. Aug 25, 2016
CalcNerd
.
8. Aug 25, 2016
NathanaelNolk
I find it intriguing that considering all the questions and good answers the people here gave you, the only thing you did was reply with an excuse.
Nidum's question is very pertinent to the matter at hand. What exactly do you know, outside of physics? What skills can you put on your resume that might attract job prospects?
Do you have any experiences outside of the physics curriculum, such as software engineering, data analysis or anything that might help? If not, are you willing to put in the time to get these skills?
Concerning grad school, have you looked into closely related fields such as computer science and engineering? What about experimental physics or medical physics?
9. Aug 25, 2016
You do realize that 20$a hour is over 35K a year right? That annual salary is actually around 17 a hour. Hopefully you didn't get an offer letter for 17 and refused, because it's basically what you asked for. 10. Aug 25, 2016 Vanadium 50 Staff Emeritus You got three pages of advice in your last thread. Did you follow any of it? Did any of it help? I can't imagine the most useful thing to do is to get drunk and repost essentially the same question. 11. Aug 25, 2016 VoloD I think is one of those post where I can help out. To the OP, have you ever thought about going back to graduate school as the others have suggested. You can always do another BS or go to a community college to get another degree. You dont seem disappointed with your school, just your major. I was (and generally am) in your boat. It is perfectly okay to get a more marketable degree after getting your first one. ALOT of people dont make it in the more marketable majors (engineering,nursing,accounting, etc.) on their first try. I would suggest going a school system that you feel is more supportive or less stressful, but that does not seem to be the problem. Look for positions like data analyst, environmental scientist, and if you are good at coding you can do some basic data entry and C++ jobs. 12. Aug 25, 2016 gleem actually$41,600 gross.
13. Aug 25, 2016
Student100
14. Aug 26, 2016
DrSteve
Can this thread be closed? The OP is not willing to take the minimal steps to improve his or her situation and is thus using up valuable real estate. In my opinion he deserves his low paying job.
15. Sep 5, 2016
VoloD
I think that is a tad harsh Dr. Steve
I understand that the OP needs to make a serious career choice. However, I do believe that collegiate education is a pathway to furthering ones job prospects. I think the OP may need to considering looking up companies that hire Physics graduates or possibly try to find ways to go back to school (be it Graduate or community)
To the OP, I went through the exact same situation as you. You can message me if you would like. I understand you like physics, but perhaps there is an area of physics which you can better translate to marketable skills. This is up for you to decided.
|
2018-01-24 01:59:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19015632569789886, "perplexity": 1132.4433353866355}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084892892.86/warc/CC-MAIN-20180124010853-20180124030853-00112.warc.gz"}
|
http://www.physicsforums.com/showthread.php?t=450438
|
## Deriving pressure, density and temperature profile of atmosphere
1. The problem statement, all variables and given/known data
Derive the pressure, density and temperature profiles of an adiabatically stratified plane-parallel atmosphere under constant gravitational acceleration g. Assume that the atmosphere consists of an ideal gas of mean molecular weight $$\mu$$.
Given $$\mu$$=14u, g = 9.81m/s^2, z = 8500m, T (@sea level) = 300K, calculate temperature and pressure at the summit.
2. Relevant equations
Edit: removed the ideal gas law and barometric formula because I think I was on the wrong track with them...
3. The attempt at a solution
I have been able to derive the barometric formula (which doubles as a pressure and density profile) from the ideal gas law, but am stuck in a bit of a circular problem: I need the temperature at the top of the summit to get the pressure, and vice versa. I don't know how to proceed, or maybe I've taken the wrong approach.
Any help would be appreciated!
PhysOrg.com science news on PhysOrg.com >> Leading 3-D printer firms to merge in $403M deal (Update)>> LA to give every student an iPad;$30M order>> CIA faulted for choosing Amazon over IBM on cloud contract
Blog Entries: 7 Recognitions: Gold Member Homework Help What does "adiabatically stratified" mean? Is it that pVγ= const. ?
Quote by kuruman What does "adiabatically stratified" mean?
I interpreted it to mean that the atmosphere can be modeled as planes of thickness dz that are adiabatic.
Blog Entries: 7
Recognitions:
Gold Member
Homework Help
## Deriving pressure, density and temperature profile of atmosphere
Quote by voxel I interpreted it to mean that the atmosphere can be modeled as planes of thickness dz that are adiabatic.
They "are adiabatic" in what way? Could it be that as z changes, the product pVγ remains constant? If so you have three equations: barometric, ideal gas and adiabatic condition and three thermodynamic variables. You can eliminate any two variables and find the other in terms of z.
I think you're right in that as z changes, the product $$PV^\gamma = const$$. However, I'm not seeing how I can eliminate P and V to get T(z).. edit: clarification: I don't see how I can eliminate two of the thermodynamic variables without introducing an unknown constant.
Blog Entries: 7 Recognitions: Gold Member Homework Help Use the ideal gas law to eliminate the volume in the adiabatic condition to find an expression that says (Some power of p)*(some other power of T) = constant. Find the value of the constant from the initial conditions. Solve for the pressure and replace the expression you get for p in the barometric equation. This will give you an equation with T and z only.
Similar discussions for: Deriving pressure, density and temperature profile of atmosphere Thread Forum Replies Astrophysics 0 Introductory Physics Homework 4 Introductory Physics Homework 5 Advanced Physics Homework 0 Classical Physics 5
|
2013-06-20 11:54:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6604909896850586, "perplexity": 945.9892326325888}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368711515185/warc/CC-MAIN-20130516133835-00026-ip-10-60-113-184.ec2.internal.warc.gz"}
|
https://www.groundai.com/project/channel-simulation-and-coded-source-compression/
|
Channel Simulation and Coded Source Compression
# Channel Simulation and Coded Source Compression
## Abstract
This work establishes connection between channel simulation and coded source compression. First, we consider classical source coding with quantum side-information where the quantum side-information is observed by a helper and sent to the decoder via a classical channel. We derive a single-letter characterization of the achievable rate region for this problem. The direct part of our result is proved via the measurement compression theory by Winter, a quantum to classical channel simulation. Our result reveals that a helper’s scheme that separately conducts a measurement and a compression is suboptimal, and the measurement compression is fundamentally needed to achieve the optimal rate region. We then study coded source compression in the fully quantum regime. We characterise the quantum resources involved in this problem, and derive a single-letter expression of the achievable rate region when entanglement assistance is available. The direct coding proof is based on a combination of two fundamental protocols, namely the quantum state merging protocol and the quantum reverse Shannon theorem. Our work hence resolves coded source compression in the quantum regime.
## 1 Introduction
Source coding normally refers to the information processing task that aims to reduce the redundancy exhibited when multiple copies of the same source are used. In establishing information theory, Shannon demonstrated a fundamental result that source coding can be done in a lossless fashion; namely, the recovered source will be an exact replica of the original one when the number of copies of the source goes to infinity [1]. If representing the source by a random variable with output space and distribution , lossless source coding is possible if and only if the compression rate is above its Shannon entropy:
R≥H(X), (1)
where .
Redundancy can also exist in the scenario in which multiple copies of the source are shared by two or more parties that are far apart. Compression in this particular setting is called distributed source coding, which has been proven to be extremely important in the internet era. The goal is to minimise the information sent by each party so that the decoder can still recover the source faithfully. Shannon’s lossless source coding theorem can still be applied individually to each party. However, it is discovered that a better source coding strategy exists if the sources between different parties are correlated. Denote and the sources held by the two distant parties, where the joint distribution is and the output spaces are and , respectively. Slepian and Wolf showed that lossless distributed source coding is possible when the compression rates and for the two parties satisfy [2]:
R1 ≥H(X|Y), (2) R2 ≥H(Y|X), (3) R1+R2 ≥H(XY), (4)
where is the conditional Shannon entropy. This theorem is now called the classical Slepian-Wolf theorem [2]. In particular, when source is directly observed at the decoder, the problem is sometimes called source coding with (full) side-information.
Another commonly encountered scenario in a communication network is that a centralised server exists and its role is to coordinate all the information processing tasks, including the task of source coding, between the nodes in this network. Obviously, the role of the server is simply as a helper and it is not critical to reproduce the exact information communicated by the server. This slightly different scenario results in a completely different characterisation of the rate region, as observed by Wyner [3] and Ahlswede-Körner [4]. Consider that the receiver wants to recover the source with the assistance of the server (that we will call a helper from now on) holding , where the distribution is . Wyner and Ahlswede-Körner showed that the optimal rate region for lossless source coding of with a classical helper is the set of rate pairs such that
R1 ≥H(X|U), (5) R2 ≥I(U;Y), (6)
for some conditional distribution , and is the classical mutual information between random variables and . When there is no constraint on (i.e. can be as large as it can be), this problem reduces to source coding with (full) side-information.
The problem of source coding, when replacing classical sources with quantum sources, appears to be highly nontrivial in the first place1. The first quantum source coding theorem was established by Schumacher [5, 6]. A quantum source can be losslessly compressed and decompressed if and only if the rate is above its von Neumann entropy2:
R≥H(A)ρ, (7)
where .
Schumacher’s quantum source coding theorem bears a close resemblance to its classical counterpart. One will naturally expect that the same will hold true for the distributed source coding problem in the quantum regime. Consider that Alice, who has the quantum system of an entangled source , would like to merge her state to the distant party Bob. The quantum distributed source coding theorem (also known as quantum state merging) aims to answer the optimal rate at which quantum states with density matrix can be communicated to a party with quantum side information faithfully. As it turns out, the optimal rate is given by the conditional von Neumann entropy , a quantum generaliztion of classical conditional Shannon entropy. While the quantum formula to the distributed source coding problem is also of the form of conditional entropy, this result has a much deeper and profound impact in the theory of quantum information as it marks a clear departure between classical and quantum information theory. It is rather perplexing that the rate is quantified by the conditional entropy , which can be negative. This major piece of the puzzle was resolved with the interpretation that if the rate is negative, the state can be merged, and in addition, the two parties will gain amount of entanglement for later quantum communication [8, 9, 10]. The distributed quantum source coding problem was later fully solved [11, 13] where the trade-off rate region between the quantum communication and the entanglement resource is derived. The result is now called the fully quantum Slepian-Wolf theorem (FQSW).
Source coding with hybrid classical-quantum systems with representing a classical system and a quantum state is also considered in quantum information theory, and one of our main results falls into this category. In [14], Devetak and Winter considered classical source coding with quantum side information at the decoder, and showed that the optimal rate is given by . This result can be regarded as a classical-quantum version of the source coding with (full) side-information.
In this work, we first revisit the classical coded source compression [3, 4]. In particular, we provide a simpler achievability proof based on the idea of channel simulation (Theorem 2). The proof indicates that channel simulation is a general subroutine employed between the helper and the decoder in the task of coded source compression.
Next, we consider classical source coding with a quantum helper. In our setup, the quantum side-information is observed by the helper, and the decoder will only have a classical description from the quantum helper. Although our problem can be regarded as a classical-quantum version of the classical helper problem studied in [3, 4], in contrast to its classical counterpart, our problem does not reduce to source coding with quantum side-information studied in [14] even if there is no constraint on rate . However, when the ensemble that constitutes the quantum side-information is commutative, our problem reduces to the classical helper problem [3, 4]. We completely characterize the rate region of the classical source coding with the quantum helper problem. In fact, the formulae describing the rate region (cf. Theorem 3) resembles its classical counterpart (cf. (9) and (10)). However, the proof technique is very different due to the quantum nature of the helper. In particular, we use the measurement compression theory by Winter [23] in the direct coding theorem. One of interesting consequences of our result is that a helper’s scheme that separately conducts a measurement and a compression is suboptimal; measurement compression is fundamentally needed to achieve the optimal rate region.
Next, we extend the classical distributed source coding problem [3, 4] and its classical-quantum generalisation to the fully quantum version; namely compression of a quantum source with the help of a quantum server. Moreover, we consider a general setting where entanglement assistance between sender-decoder and helper-decoder is available. Our direct coding proof combines two fundamental quantum protocols; the state merging protocol [8, 9] and the quantum reverse Shannon theorem [28]. The current progress of coded source compression is summarized in Table 1. We remark that the quantum source compression with a classical helper is a very subtle task, which is left open even when the decoder has full classical-side information [14] (instead of partial side-information from the classical helper).
The quantum source compression with a quantum helper is treated in a different scenario in Ref. [9]. Overall there, classical communication is allowed from the helper to the receiver, and limited entanglement resource is considered. As a result, their formula requires regularization. In contract, our result resorts to quantum reverse Shannon theorem, and has the appealing single-letter expression.
There are a huge amount of work devoted to both classical and quantum lossy source coding [15, 16, 17, 18, 19, 20]. We will restrict ourselves to only noiseless source coding in this work. However, as it turns out, channel simulation simplifies both rate distortion theory and coded source compression.
Notations. In this paper, we will use capital letters etc. to denote classical random variables, and lower cases to denote their realisations. We use to denote the sample spaces. We denote .
A quantum state is a positive semi-definite matrix with trace equal to one. We will use or to denote a quantum states in this paper. In case we need to specify which party the quantum state belongs to, we will use a subscript description , meaning that the quantum system is held by A(lice). Letting be a set of orthonormal basis vectors, a classical-quantum state is written as
ρXB=∑xpX(x)|x⟩⟨x|⊗ρx,
so that copies of it is
ρ⊗nXB=∑xnp(n)X(xn)|xn⟩⟨xn|⊗ρxn,
where we denote for the sequence . A positive-operator valued measure (POVM), , is a quantum measurement whose elements are non-negative self-adjoint operators on a Hilbert space so that .
Various entropic quantities will be used in the paper. The von Neumann entropy of a quantum state , where the subscript represents the quantum state is held by A(lice), is . The conditional von Neumann entropy of system conditioned on of a bipartite state is . The quantum mutual information between two systems and of is . The conditional quantum mutual information .
We will also employ the framework of Resource Inequality (RI) [12, 13]. The RIs are a concise way of describing interconversion of resources in an information-processing task. Denote by and an ebit (maximally entangled pairs of qubits) and a noiseless qubit channel, respectively. Then a quantum channel that can faithfully transmit qubits per channel use with an unlimited amount of entanglement assistance can be symbolically represented as
⟨N⟩+∞[qq]≥Q(N)[q→q],
where is an asymptotic noisy resource that corresponds to many independent uses, i.e. . Schumacher’s noiseless source compression [5] can be similarly expressed
H(B)ρ[q→q]≥⟨ρB⟩,
which means that a rate of noiseless qubits asymptotically is sufficient to represent the noisy quantum source .
Sometimes, the RI only applies to the relative resource, , which means that the asymptotic accuracy is achieved only when uses of are fed an input of the form . For detailed treatment of combining two RIs and cancellations of quantum resources, see Ref. [12].
This paper is organised as follows. In Sec 2, we revisit classical source compression with a helper using the channel simulation idea. In Sec 3, we formally define the problem of source coding with a quantum helper, and present the main result as well as its proof. In Sec 4, we treat the source coding with a helper in the fully quantum regime. We conclude in Sec 5 with open questions.
## 2 Classical Coded Source Compression
Consider two classical channels and . We define an channel simulation code consisting of the encoding and decoding pair , that takes uses of the first channel to simulate instances of the channel output so that,
∥W⊗m2(xm)−Dn∘W⊗n1∘En(xm)∥1≤ϵ.
The channel simulation capacity of the first channel needed to simulate the second channel is defined as:
C(W1→W2)=limϵ→0limsupn→∞{mn:% an (n,m,ϵ) code exists.} (8)
The Shannon’s noiseless channel coding theorem can be seen as a special case of that is the identity channel .
Another special case is the following classical reverse Shannon Theorem where is the identity channel .
###### Theorem 1 (Classical Reverse Shannon Theorem [27])
C(id→N)=C(N).
where is the Shannon capacity of the channel .
Using the noiseless resource to simulate a noisy one seems a useless task to explore at first glance. However, it turns out that such a task can be used to simplify coded source compression conceptually (among others).
###### Theorem 2 (Classical source compression with a classical helper [3, 4])
The optimal rate region for lossless source coding of with a classical helper is the set of rate pairs such that
R1 ≥H(X|U), (9) R2 ≥I(U;Y), (10)
for some conditional distribution , and is the classical mutual information between random variables and .
With the help of Theorem 1, we can provide a simpler direct coding theorem for Theorem 2.
Proof. In the proof, the strategy that the classical helper does can be conceptually viewed as assisting the decoder to simulate the local channel . For sufficiently large, the classical communication rate that the helper needs to send is and
∥pnUX−qUnXn∥1≤ϵ,
where is the joint distribution induced by the simulation of . Thus, the full side information about is possessed by the decoder, and the source compression with full side information can be carried on. Since the helper’s local channel can be simulated at the decoder whose inaccuracy is at most , the overall error for classical source compression with a classical helper can be achieved with this additional error.
## 3 Classical Source Compression with a Quantum Helper
As shown in Figure 1, the protocol for classical source coding with a quantum helper involves two senders, Alice and Bob, and one receiver, Charlie. Initially Alice and Bob hold copies of a classical-quantum state . In this case, Alice holds classical random variables while Bob (being a helper) holds a quantum state that is correlated with Alice’s message. The goal is for the decoder Charlie to faithfully recover Alice’s message when assisted by the quantum helper Bob.
We now proceed to formally define the coding procedure. We define an code for classical source compression with a quantum helper to consist of the following:
• Alice’s encoding operation , where and ;
• Bob’s POVM , where and ;
• Charlie’s decoding operation
so that the error probability satisfies
Pr{Xn≠ˆXn}≤ϵ. (11)
A rate pair is said to be achievable if for any and all sufficiently large , there exists an code with rates and . The rate region is then defined as the collection of all achievable rate pairs. Our main result is the following theorem.
###### Theorem 3
Given is a classical-quantum source . The optimal rate region for lossless source coding of with a quantum helper is the set of rate pairs such that
R1 ≥H(X|U) (12) R2 ≥I(U;B)σ. (13)
The state resulting from Bob’s application of the POVM is
σUB(Λ)=∑u∈UpU(u)|u⟩⟨u|⊗ρu (14)
where
pU(u) =Tr(ρBΛu) (15) ρu =1pU(u)[√ρBΛu√ρB]∗ (16) ρB =∑xpX(x)ρx. (17)
where denotes complex conjugation in the standard basis. Furthermore, we can restrict the size of POVM as , where is the dimension of Bob’s system.
A typical shape of the rate region in Theorem 3 is described in Fig. 2. When there is no constraint on , rate can be decreased as small as
H(X|U∗) :=minΛH(X|U) (18) =H(X)−maxΛI(X;U) (19) =H(X)−Iacc, (20)
where is the accessible information for the ensemble . Unless the ensemble commutes [7], the minimum rate is larger than the rate , which is the optimal rate in the source coding with quantum side-information [14]. To achieve , it suffices to have , which is smaller than in general. This means that the following separation scheme is suboptimal: first conduct a measurement to get and then compress . For more detail, see the direct coding proof.
Converse.
Let be Alice’s encoder, and let be Bob’s measurement. Alice sends to the decoder, and Bob sends the measurement outcome to the decoder. The Fano’s inequality states that for some as .
First, we have the following bound:
log|M| ≥H(M) (21) ≥H(M|L) (22) ≥H(Xn|L)−H(Xn|M,L) (23) \lx@stackrel(a)≥H(Xn|L)−nϵn (24) \lx@stackrel(b)≥n∑t=1H(Xt|X
where follows from Fano’s inequality: for some as ; in (b), we use chain rule and denote ; in , we denote ; in , we introduce a time-sharing random variable that is uniformly distributed in the set .
Next, we have
log|L| ≥H(L) (28) ≥I(L;Bn) (29) =n∑t=1I(L;Bt|B
where (a) follows from
I(Xt|L,B
Following from Eq. (34), we can again introduce a time-sharing random variable that is uniformly distributed in the set ,
n∑t=1I(Ut;Bt) =nn∑t=1I(Ut;Bt|J=t) (41) =nI(UJ;BJ|J) (42) =nI(UJJ;BJ) (43)
where the last equality follows because . To get single-letter formula, define , , and and let :
R1 =1nlog|M|≥H(X|U) (44) R2 =1nlog|L|≥I(U;B). (45)
Here, we note that the distribution of can be written as
pXtρBi)}Λℓ]. (46)
Thus, we can get as a measurement outcome of by first generating , then by appending and to ancillae systems, and finally by conducting the measurement .
Finally, the bound on can be proved via Carathódory’s theorem (cf. [26, Appendix C]).
Direct Coding Theorem. Fix a POVM measurement . It induces a conditional probability , and joint probability distribution
PXU(x,u)=pX(x)pU|X(u|x). (47)
The crucial observation is the application of Winter’s measurement compression theory [23].
###### Theorem 4 (Measurement compression theorem [23, 24])
Let be a source state and a POVM to simulate on this state. A protocol for a faithful simulation of the POVM is achievable with classical communication rate and common randomness rate if and only if the following set of inequalities hold
R≥I(X;R), R+S≥H(X), (48)
where the entropies are with respect to a state of the following form:
Missing or unrecognized delimiter for \left (49)
and is some purification of the state .
Let be a random variable on , which describes the common randomness shared between Alice and Bob. Let be collection of POVMs. Let
QnX˜U(xn,un):=P(n)X(xn)∑k∈K1|K|Tr[ρxn˜Λ(k)un], (50)
where . The faithful simulation of copies of POVM , i.e. , implies that for any , there exists sufficiently large, such that there exist POVMs , where , with
12∥P(n)XU−QnX˜U∥1≤ϵ. (51)
Coding Strategy:
Alice and Bob shared copies of the state , and assume that Bob performs measurement on his quantum system whose outcome is sent to the decoder to assist decoding Alice’s message. Bob’s measurement on each copy of will induce the probability distribution according to (47). Apparently, if Bob sends the full measurement outcomes to Charlie (say bits), then Charlie can successfully decode simply from Slepian-Wolf Theorem. The next strategy is to make use of classical result since after Bob’s measurement, Alice and Bob become fully classical with joint distribution . Therefore, the minimum rate for Bob is (w.r.t. some conditional distribution ). However, there is a non-trivial quantum coding strategy. Detail follows.
Bob’s coding. Instead of the measurement performed on Bob’s system and coding w.r.t. the classical channel , the decoder Charlie can directly simulate the measurement outcome using Winter’s measurement compression theorem [23, 24]. Denote Bob’s classical communication rate . Then Theorem 4 promises that by sending from Bob to the decoder Charlie, Charlie will have a local copy and the distribution between Alice and Charlie will satisfy (51).
Alice’s coding. Now Alice’s strategy is very simple since Charlie has had . She just uses the Slepian-Wolf coding strategy as if she starts with the distribution with Charlie. In fact, it is well known (cf. [21]) that there exists an encoder and a decoder such that and
P(n)XU(Ac)≤ϵ (52)
for sufficiently large , where
A:={(xn,un)∈Xn×Un:D(φ(xn),un)=xn} (53)
is the set of correctably decodable pairs.
Now, suppose that Alice and Bob use the same code for the simulated distribution . Then, by the definition of the variational distance and (51), we have
QnX~U(Ac)≤P(n)XU(Ac)+ϵ. (54)
Thus, if we can find a good code for , we can also use that code for for sufficiently large .
Derandomization. The standard derandomization technique works here. If the random coding strategy works fine on average, then there exists one realisation works fine too. Since the distribution , and
∑k1|K|QnX˜U|K=k(Ac)=QnX~U(Ac)≤PXU(Ac)+ϵ. (55)
Thus, there exists one so that is small.
## 4 Fully Quantum Source Compression with a Quantum Helper
As shown in Figure 3, the protocol for fully quantum source coding with a quantum helper involves two senders, Alice and Bob, and one receiver, Charlie. Initially Alice and Bob hold copies of a bipartite quantum state , where Alice holds quantum systems while Bob (being a quantum helper) holds quantum systems . Moreover, there are pre-shared entangled states between Alice and Charlie, and pre-shared entangled states between Bob and Charlie. The goal is for the decoder Charlie to faithfully recover Alice’s quantum state when assisted by the quantum helper Bob.
We now proceed to formally define the coding procedure. Let be a purification of . We define an code for fully quantum source compression with a quantum helper to consist of the following:
• Alice’s encoding operation , where is a quantum system and is a classical system; Alice only sends to Charlie;
• Bob’s encoding operation , where is a quantum system of dimension ; Bob sends the quantum system to Charlie;
• Charlie’s decoding operation that produces
ωA1C1ˆAnˆLRnˆT′B=IA1⊗D(σA1MLRnT′AT′B)
where
σA1MLRnT′AT′B=EA⊗EB⊗IRn(ψ⊗nABR⊗ΦTAT′A⊗ΦTBT′B);
so that the final state satisfies
∥ωA1C1ˆAnˆLRnˆT′B−ΦA1C1⊗ρAnLRnT′B∥1≤ϵ, (56)
where is a maximally entangled state.
Let . A rate pair is said to be achievable if for any and all sufficiently large , there exists an code with rates and . The rate region is then defined as the collection of all achievable rate pairs. Our main result is the following theorem.
###### Theorem 5
Given is a bipartite quantum source . The optimal rate region for lossless source coding of with a quantum helper is the set of rate pairs such that
R1 ≥ H(A|C)ϕ (57) R2 ≥ 12I(RA;C)ϕ. (58)
The state resulting from Bob’s application of some CPTP map is
|ϕACER⟩=IRA⊗UEB→CE|ψρABR⟩. (59)
### 4.1 Direct part
#### Relevant quantum protocols
Given a bipartite state whose purification is , the state merging protocol [8, 9, 10] is the information-processing task of distributing -part of the system that originally belongs to Alice to the distant Bob without altering the joint state. Moreover, Alice and Bob have access to pre-shared entanglement and their goal is to minimise the number of EPR pairs consumed during the protocol. The state merging can be efficiently expressed as the following RI:
⟨ψA|B|R⟩+I(A;R)ψ[c→c]+H(A|B)ψ[qq]≥⟨ψ|AB|R⟩ (60)
where the notation denotes the state is originally shared between three distant parties Alice, Bob, and Eve, while means that the system is now together with system . This protocol involves classical communication; however, for the purpose of this paper, quantum resources are much more valuable and classical communication is considered to be free. As a result, the state merging protocol either consumes EPR pairs with rate when this quantity is positive, or generates rate of EPR pairs for later uses, if is negative, after the transmission of the system to .
The state merging protocol gives the first operational interpretation to the conditional von Neumann entropy. Most importantly, it provides an answer to the long-standing puzzle—the conditional von Neumann entropy could be negative, a situation that has no classical correspondence.
The fully quantum Slepian-Wolf (FQSW) protocol [11, 13] can be considered as the coherent version of the state merging protocol. It can be described as
⟨ψA|B|R⟩+12I(A;R)ψ[q→q]≥12I(A;B)ψ[qq]+⟨ψ|AB|R⟩. (61)
It is a simple exercise to show, via the resource inequalities, that the state merging protocol can be obtained by combining teleportation with the FQSW protocol [11, 13]. Moreover, the FQSW protocol can be transformed into the a version of the quantum reverse Shannon theorems (QRST) that involves entanglement assistance [11].
The quantum reverse Shannon theorem (QRST) addresses a fundamental task that asks, given a quantum channel , how much quantum communication is required from Alice to Bob so that the channel can be simulated. There are variants of the QRSTs depending on whether entanglement or feedback is allowed in the simulation (see [28, Theorem 3]). The QRST protocol has become a powerful tool in quantum information theory. It can be used to establish a strong converse to the entanglement-assisted capacity theorem. Moreover, it can also be used to establish quantum rate distortion theorems [18, 19, 20].
In this paper, we will use the QRST with entanglement assistance.
###### Theorem 6 (Quantum Reverse Shannon Theorem)
Let be a quantum channel from to so that its isometry results in the following tripartite state when inputting :
|ψRBE⟩=UNA→BE|ψρRA⟩,
where . Then with sufficient amount of pre-shared entanglement, the channel with input can be simulated with quantum communication rate :
12I(R;B)ψ[q→q]+12I(E;B)ψ[qq]≥⟨N:ρA⟩. (62)
Proof.
We use the channel simulation method. For any local channel performed by the quantum helper on his half of bipartite state , it can be simulated by the decoder using the quantum reverse Shannon theorem (QRST) (Theorem 6):
12I(RA;C)ϕ[q→q]+12I(E;C)ϕ[qq]≥⟨E:ρB⟩, (63)
where
|ϕACER⟩=IR⊗UEB→CE|ψρABR⟩.
In other words, by using the pre-shared entanglement between the helper and the decoder with rate and sending quantum message from the helper to the decoder with rate , the decoder can simulate the quantum state locally with error goes to zero in the asymptotic sense.
Alice’s coding: Once the decoder has the system , then Alice and the decoder start the state merging protocol, using the pre-shared entanglement with rate .
### 4.2 Converse part
Here, we refer to Figure 3 for corresponding labels used in the converse proof. To bound , we follow steps in the converse proof of the state merging protocol [9] and have
R1 \lx@stackrel>∼H(An|LT′B) (64) =n∑i=1H(Ai|LT′BA
where we set and in the last equality, we relabel and .
To bound the quantum communication rate , we follow steps in the converse proof of the entanglement-assisted quantum rate-distortion theorem (see equation (21) in [19]):
2R2 ≥I(LT′B;RnAn) (69) =n∑i=1I(LT′B;RiAi|R
Note that can be generated from via Bob’s local CPTP. In fact, Bob can first append the maximally entangled states , systems , and . Then, he can perform , and get .
## 5 Conclusion and Discussion
We considered the problem of compression of a classical source with a quantum helper. We completely characterised its rate region and showed that the capacity formula does not require regularisation, which is not common in the quantum setting. While the expressions for the rate region are similar to the classical result in [3, 4, 22], it requires vey different proof technique. To prove the achievability, we employed a powerful theorem, measurement compression theorem [23], that can decompose quantum measurement. A similar approach was recently applied to derive a non-asymptotic bound on the classical helper problem [25].
The rate region in our Theorem 5 bears a close resemblance to its classical counterpart. Our result also shows a helper’s strategy of simply compressing the side information and sending it to the decoder is sub-optimal with entanglement assistance. Recall the following identity:
H(C)ϕ=12I(C;E)ϕ+12I(C;RA)ϕ,
where the state is given in (59). The QRST protocol allows us to cleverly divide the amount of quantum communication required for lossless transmission of system to the decoder into pre-shared entanglement with rate and quantum communication with rate .
We will like to point out that the definition of the fully quantum source compression with a quantum helper requires to explicitly include additional quantum systems (see Eq. (56)) for a technical purpose. The reason behind this is because when the quantum state merging is performed, the target systems to which the quantum state is merged needs to be specified. We believe that the inclusion of these additional systems in the definition is inevitable, and it signals a fundamental difference between the fully quantum source compression with a quantum helper and its classical counterpart.
Note that it is possible to replace the state merging protocol with the FQSW protocol, and derive an alternative theorem for quantum source compression with a quantum helper. It is also possible to consider the same problem without entanglement assistance between the helper and the decoder. These extensions will be treated in the future.
Finally, in the classical source coding with a helper problem, it is possible to bound the dimension for the helper’s output system. However, such a result is not unknown to be possible in the quantum regime.
## Acknowledgements
MH is supported by an ARC Future Fellowship under Grant FT140100574. SW was supported in part by JSPS Postdoctoral Fellowships for Research Abroad.
### Footnotes
1. The quantum source coding result takes a much longer time to develop if one considers that quantum theory began to evolve in the mid-1920s.
2. The subscript is a label to which the quantum system belongs.
### References
1. C. E. Shannon, “A mathematical theory of communication,” Bell Syst. Tech. J., vol. 27, pp. 379–423, 623–656, 1948.
2. D. Slepian and J. K. Wolf, “Noiseless coding of correlated information sources,” IEEE Trans. Inform. Theory, vol. 19, no. 4, pp. 471–480, 1973.
3. A. D. Wyner, “On source coding with side information at the decoder,” IEEE Trans. Inform. Theory, vol. 21, no. 3, pp. 294–300, 1975.
4. R. Ahlswede and J. Körner, “Source coding with side information and a converse for the degraded broadcast channel,” IEEE Trans. Inform. Theory, vol. 21, no. 6, pp. 629–637, 1975.
5. B. Schumacher, “Quantum coding,” Phys. Rev. A, vol. 51, no. 4, pp. 2738–2747, Apr. 1995.
6. R. Jozsa and B. Schumacher, “A New Proof of the Quantum Noiseless Coding Theorem,” J. of Modern Optics, vol. 41, no. 12, pp. 2343–2349, Dec. 1994.
7. P. Hayden, R. Jozsa, D. Petz, and A. Winter, “Structure of States Which Satisfy Strong Subadditivity of Quantum Entropy with Equality,” Communications in Mathematical Physics, vol. 246, no. 2, pp. 359–374, Feb. 2004.
8. M. Horodecki, J. Oppenheim, and A. Winter, “Partial quantum information,” Nature, vol. 436, no. 7051, pp. 673–676, Aug. 2005.
9. M. Horodecki, J. Oppenheim, and A. Winter, “Quantum State Merging and Negative Information,” Communications in Mathematical Physics, vol. 269, no. 1, pp. 107–136, Oct. 2006.
10. F. Dupuis, M. Berta, J. Wullschleger, and R. Renner, “One-Shot Decoupling,” Communications in Mathematical Physics, vol. 328, no. 1, pp. 251–284, Mar. 2014.
11. A. Abeyesinghe, I. Devetak, P. M. Hayden, and A. Winter, “The mother of all protocols: restructuring quantum information’s family tree,” Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, vol. 465, no. 2108, pp. 2537–2563, Jun. 2009.
12. I. Devetak, A. W. Harrow, and A. Winter. A Resource Framework for Quantum Shannon Theory. IEEE Trans. Inform. Theory, vol. 54, no. 10, pp. 4587–4618, Oct. 2008.
13. N. Datta and M.-H. Hsieh, “The apex of the family tree of protocols: optimal rates and resource inequalities,” New J. Phys., vol. 13, no. 9, p. 093042, 2011.
14. I. Devetak and A. Winter, “Classical data compression with quantum side information,” Phys. Rev. A, vol. 68, no. 4, Oct. 2003.
15. C. E. Shannon, “Coding theorems for a discrete source with a fidelity criterion,” IRE Nat. Conv. Rec, vol. 4, pp. 142–163, 1959.
16. T. Berger, Rate Distortion Theory: A Mathematical Basis for Data Compression. Englewood Cliffs, NJ: Prentice Hall, 1971.
17. I. Devetak and T. Berger, “Quantum rate-distortion theory for memoryless sources,” IEEE Trans. Inform. Theory, vol. 48, no. 6, pp. 1580–1589, 2002.
18. N. Datta, M.-H. Hsieh, and M. M. Wilde, “Quantum Rate Distortion, Reverse ShannonTheorems, and Source-Channel Separation,” IEEE Trans. Inform. Theory, vol. 59, no. 1, pp. 615–629, 2013.
19. M. M. Wilde, N. Datta, M.-H. Hsieh, and A. Winter, “Quantum Rate-Distortion Coding With Auxiliary Resources,” IEEE Trans. Inform. Theory, vol. 59, no. 10, pp. 6755–6773, 2013.
20. N. Datta, M.-H. Hsieh, M. M. Wilde, and A. Winter, “Quantum-to-classical rate distortion coding,” J. Math. Phys., vol. 54, no. 4, p. 042201, 2013.
21. T. M. Cover and J. A. Thomas, Elements of Information Theory . Wiley, New York, 1991.
22. A. El Gamal and Y.-H. Kim, Network information theory. Cambridge University Press, 2011.
23. A. Winter, “Extrinsic” and “Intrinsic” Data in Quantum Measurements: Asymptotic Convex Decomposition of Positive Operator Valued Measures. Communications in Mathematical Physics, 244(1), 157–185, 2004.
24. M. M. Wilde, P. M. Hayden, F. Buscemi, and M.-H. Hsieh, “The information-theoretic costs of simulating quantum measurements,” Journal of Physics A: Mathematical and Theoretical, vol. 45, no. 45, pp. 453001, Nov. 2012.
25. S. Watanabe, S. Kuzuoka, and V. Y. F. Tan, “Non-Asymptotic and Second-Order Achievability Bounds for Source Coding With Side-Information,” in Proc. 2013 IEEE International Symposium on Information Theory, pp. 3055–3059.
26. I. Devetak and A. Winter, “Distillation of secret key and entanglement from quantum states,” in Proc. of The Royal Society A, vol. 461, pp. 207–235, Jan. 2005.
27. Bennett, Charles, Peter W Shor, John A Smolin, and A V Thapliyal. “Entanglement-Assisted Capacity of a Quantum Channel and the Reverse Shannon Theorem.” IEEE Transactions on Information Theory 48, no. 10 (October 2002): 2637?55. doi:10.1109/TIT.2002.802612.
28. C. H. Bennett, I. Devetak, A. W. Harrow, P. W. Shor, and A. Winter. The Quantum Reverse Shannon Theorem and Resource Tradeoffs for Simulating Quantum Channels. IEEE Trans. Inform. Theory, vol. 60, no. 5, pp. 2926–2959, 2014.
29. M.-H. Hsieh and S. Watanabe. Source Compression with a Quantum Helper. arXiv:1501.04366, 2015 (accepted for publication in ISIT 2015).
|
2018-12-11 03:28:02
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8068432211875916, "perplexity": 854.1383519957147}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823550.42/warc/CC-MAIN-20181211015030-20181211040530-00335.warc.gz"}
|
https://byjus.com/question-answer/a-very-thin-transparent-film-of-soap-solution-of-negligible-thickness-is-seen-under-reflection/
|
Question
# A very thin transparent film of soap solution of negligible thickness is seen under reflection of white light. Then the colour of the film appear to be : A) blue B) black C) red D) yellow
Open in App
Solution
## For very thin films the distance travelled inside the film is insignificant and so the two reflected waves are almost exactly out of phase with each other (due to the phase change at one surface); they interfere destructively and the film appears black. For this reason when a soap film goes black it is about to burst Hence B is the correct option.
Suggest Corrections
8
|
2023-01-30 15:37:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.27942946553230286, "perplexity": 824.6435831869239}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499819.32/warc/CC-MAIN-20230130133622-20230130163622-00404.warc.gz"}
|
https://codeforces.com/problemset/problem/1401/E
|
E. Divide Square
time limit per test
2 seconds
memory limit per test
384 megabytes
input
standard input
output
standard output
There is a square of size $10^6 \times 10^6$ on the coordinate plane with four points $(0, 0)$, $(0, 10^6)$, $(10^6, 0)$, and $(10^6, 10^6)$ as its vertices.
You are going to draw segments on the plane. All segments are either horizontal or vertical and intersect with at least one side of the square.
Now you are wondering how many pieces this square divides into after drawing all segments. Write a program calculating the number of pieces of the square.
Input
The first line contains two integers $n$ and $m$ ($0 \le n, m \le 10^5$) — the number of horizontal segments and the number of vertical segments.
The next $n$ lines contain descriptions of the horizontal segments. The $i$-th line contains three integers $y_i$, $lx_i$ and $rx_i$ ($0 < y_i < 10^6$; $0 \le lx_i < rx_i \le 10^6$), which means the segment connects $(lx_i, y_i)$ and $(rx_i, y_i)$.
The next $m$ lines contain descriptions of the vertical segments. The $i$-th line contains three integers $x_i$, $ly_i$ and $ry_i$ ($0 < x_i < 10^6$; $0 \le ly_i < ry_i \le 10^6$), which means the segment connects $(x_i, ly_i)$ and $(x_i, ry_i)$.
It's guaranteed that there are no two segments on the same line, and each segment intersects with at least one of square's sides.
Output
Print the number of pieces the square is divided into after drawing all the segments.
Example
Input
3 3
2 3 1000000
4 0 4
3 0 1000000
4 0 1
2 0 5
3 1 1000000
Output
7
Note
The sample is like this:
|
2021-06-21 10:56:20
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7152118682861328, "perplexity": 354.1700683614562}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488269939.53/warc/CC-MAIN-20210621085922-20210621115922-00415.warc.gz"}
|
https://viva.pressbooks.pub/openmusictheory/chapter/embellishing-tones/
|
IV. Diatonic Harmony, Tonicization, and Modulation
# Embellishing Tones
John Peterson
Key Takeaways
• A summary of the various kinds of embellishing tones is available in Example 13.
• We group embellishing tones into three categories:
1. Involving only stepwise motion: passing tones, neighbor tones
2. Involving a leap: appoggiatura, escape tone
3. Involving static notes: suspension, retardation, pedal, anticipation
# Overview
Example 1 reproduces Maria Szymanowska’s March No. 6, which we also saw in our discussion of strong predominants. You might have noticed that some of the notes in the bass in mm. 8–10 don’t fit our harmonic analysis. These notes, which are blue and circled in Example 1, are collectively called “” because they embellish notes that belong to the chord. Embellishing tones can be grouped into three categories, which we describe below.
Example 1. Embellishing tones in Maria Szymanowska, March No. 6 from Six Marches (0:00-0:16).
Nearly all embellishing tones are three-note gestures in which the embellishing tone is the middle note and the notes on each side of the embellishing tone are consonant with the bass (Example 2). The actual embellishing tone itself may be either consonant or dissonant with the bass. In almost all cases, however, the embellishing tone is a note that doesn’t belong to the underlying chord.
# Category 1: Embellishing tones that move by step
Example 1 showed the two kinds of embellishing tones that move by step: and . Passing tones are approached by step and left by step in the same direction. They may either ascend or descend (Example 3). Neighbor tones are approached by step and left by step in the opposite direction. There are upper neighbors and lower neighbors (Example 4).
Example 3. Passing tones in a two-voice texture, ascending (a) and descending (b).
Example 4. Upper neighbor (a) and lower neighbor (b) tones in a two-voice texture.
# Category 2: Embellishing tones that involve a leap
Examples 5 and 6 show the two kinds of embellishing tones that involve a leap: and . Appoggiaturas are approached by leap and left by step in the opposite direction (Example 7). The appoggiatura typically occurs on a stronger part of the beat than its surrounding notes. Escape tones are approached by step and left by leap in the opposite direction (Example 8). The escape tone typically occurs on a weaker part of the beat than its surrounding notes. Both appoggiaturas and escape tones are more commonly seen in the pattern shown in Examples 7a and 8a rather than 7b and 8b.
Example 5. An appoggiatura in Joseph Boulogne, String Quartet No. 4, I, mm. 5–9 (0:09-0:19).
Example 6. An escape tone in Margaret Casson, “The Cuckoo.”
Example 7. Appoggiaturas in a two-voice texture.
Example 8. Escape tones in a two-voice texture.
# Category 3: Embellishing tones involving static notes
Examples 9 and 10 show three of the four kinds of embellishing tones that involve static notes (i.e. notes that don’t move): , , and . A fourth kind of embellishing tone, the anticipation, deserves special comment below.
Pedal tones are often found in the bass. They consist of a series of static notes over top of which chord changes occur that do not include the bass. We typically label them using the scale degree number of the pedal note as in Example 10.
Example 11 demonstrates suspensions and retardations. Suspensions are approached by a static note and left by step down. The suspension is always on a stronger part of the beat than its surrounding notes. Retardations are approached by a static note and left by step up. The retardation is always on a stronger part of the beat than its surrounding notes.
Example 9. Suspensions and a retardation in Joseph Boulogne’s String Quartet No. 4, I, mm. 47–49 (1:30–1:36).
Example 10. Pedal tone in Josephine Lang’s “Dem Königs-Sohn,” mm. 16–18.
Example 11. Suspension (a) and retardation (b) in a two-voice texture.
### Anticipations
Like the suspension, retardation, and pedal tone, also involve static notes. But anticipations are a two-note (rather than three-note) gesture, in which a chord tone is heard early as a non-chord tone (Example 12). In other words, it “anticipates” its upcoming membership in a chord.
Example 12. An anticipation in Josephine Lang’s “Erinnerung,” mm. 29–30 (1:54–1:59).
# Summary
The table in Example 13 provides a summary of the embellishing tones covered in this chapter.
Category Embellishing Tone Approached by Left by Direction Additional Detail
Involving steps Passing Tone (PT) Step Step Same May ascend or descend
Neighbor Tone (NT) Step Step Opposite Both upper and lower exist
Involving a leap Appoggiatura (APP) Leap Step Opposite Appoggiatura is usually on a strong part of the beat
Escape Tone (ET) Step Leap Opposite Escape tone is usually on a weaker part of the beat
Involving static notes Suspension Static note Step Down Suspension is always on a stronger part of the beat
Retardation Static note Step Up Retardation is always on a stronger part of the beat
Pedal Tone ( $\hat{x}$ Ped) Static note Static note N/A
Anticipation (ANT) N/A Static note N/A The anticipation is usually on a weaker part of the beat
Example 13. Summary of embellishing tones.
Assignments
1. Embellishing tones (.pdf, .docx). Asks students to write embellishing tones in a two-voice texture and label embellishing tones in an excerpt.
|
2022-05-22 08:16:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33736345171928406, "perplexity": 6446.549709009176}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662545090.44/warc/CC-MAIN-20220522063657-20220522093657-00578.warc.gz"}
|
https://chat.stackoverflow.com/transcript/68414/2022/7/5
|
8:56 AM
0
Prior to April, I was able to run this PS1 script to install Adobe acrobat and zoom. The script was able to check the version on the site (s) and download and install... However, from April to here I have to specify the version (current) in order to download and install... Can anyone help me to h...
5 hours later…
1:46 PM
3
I'm working on a question from a past exam paper, Show that the set of solutions $G_{d,p}$ to Pell's equation $x^2-dy^2=1$ modulo $p$ is a finite abelian group, and compute the order of this group for $p=5$ and all $d$. Just proving associativity took 6 page-width lines of fairly meticulous wor...
2:27 PM
7
For nice topological spaces (say Haudorff spaces) $X$ and $Y$, there is a bijection between continuous maps $X\to Y$ and isomorphism classes of geometric morphisms $\mathrm{Sh}(X)\to \mathrm{Sh}(Y)$. Question: Is there a similar statement for "nice" schemes, i.e., that morphisms of schemes $X\to ... 2 hours later… 3:57 PM 1 Let$E$be a nowhere dense subset of$\mathbb{R}\times \mathbb{R}$. For$x\in \mathbb{R}$, define $$E_x=\{ y\in\mathbb{R}\mid (x,y)\in E\}.$$ Let$D$denote the set of$x$for which$E_x$is NOT nowhere dense in$\mathbb{R}$. By the Kuratowski-Ulam Theorem, we know that$D$is of first cateogory ... 6 hours later… 10:27 PM 4 Let$k$be a nonarchimedean local field and$G$a reductive$k$-group, which we assume to be semisimple and simply-connected. Recall that an abstract group$H$is perfect if it is generated by commutators, that is, equals its derived subgroup. Question: Is$G(k)$perfect? When$G$is isotropic,$...
10:40 PM
1
I want to write a number 1234 in the form 1 234. I tried \documentclass[12pt,a4paper]{article} \usepackage{amsmath} \begin{document} The number $1234$ is written in the form $1\,234$; The number $123456789$ is written in the form $123\,456\,789$. \end{document} How can I make it automati...
|
2022-08-08 01:49:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9243088364601135, "perplexity": 514.4989284943407}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570741.21/warc/CC-MAIN-20220808001418-20220808031418-00743.warc.gz"}
|
http://www.chegg.com/homework-help/questions-and-answers/calculate-l-c-values-l-network-match-9-ohms-transistor-power-amplifier-75-ohms-antenna-122-q2543282
|
## pls help
Calculate the L and C values of an L network that is to match 9 ohms transistor power amplifier to a 75 ohms antenna at 122 MHz.
|
2013-05-22 05:31:00
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8731569647789001, "perplexity": 2509.1502143676094}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701370254/warc/CC-MAIN-20130516104930-00016-ip-10-60-113-184.ec2.internal.warc.gz"}
|
https://denisegaskins.com/tag/euclid/
|
Remember the Math Adventurer’s Rule: Figure it out for yourself! Whenever I give a problem in an Alexandria Jones story, I will try to post the answer soon afterward. But don’t peek! If I tell you the answer, you miss out on the fun of solving the puzzle. So if you haven’t worked these problems yet, go back to the original post. Figure them out for yourself — and then check the answers just to prove that you got them right.
Euclid’s Geometric Algebra
## Euclid’s Geometric Algebra
Picture from MacTutor Archives.
After the Pythagorean crisis with the square root of two, Greek mathematicians tried to avoid working with numbers. Instead, the Greeks used geometry to demonstrate mathematical concepts. A line can be drawn any length, so straight lines became a sort of non-algebraic variable.
You can see an example of this in The Pythagorean Proof, where Alexandria Jones represented the sides of her triangle by the letters a and b. These sides may be any length. The sizes of the squares will change with the triangle sides, but the relationship $a^2 + b^2 = c^2$ is always true for every right triangle.
## Math History on the Internet
[Image from the MacTutor Archive.]
The story of mathematics is the story of interesting people. What a shame it is that our children see only the dry remains of these people’s passion. By learning math history, our students will see how men and women wrestled with concepts, made mistakes, argued with each other, and gradually developed the knowledge we today take for granted.
In a previous article, I recommended books that you may find at your local library or be able to order through inter-library loan. Now, let me introduce you to the wealth of math history resources on the Internet.
## Euclid’s Game on a Hundred Chart
Math concepts: subtraction within 100, number patterns, mental math
Number of players: 2 or 3
Equipment: printed hundred chart (also called a hundred board), and highlighter or translucent disks to mark numbers — or use this online hundred chart
## Set Up
Place the hundred chart and highlighter where all players can reach them.
## How to Play
• Allow the youngest player choice of moving first or second; in future games, allow the loser of the last game to choose.
• The first player chooses a number from 1 to 100 and marks that square on the hundred chart.
• The second player chooses and marks any other number.
• On each succeeding turn, the player subtracts any two marked numbers to find and mark a difference that has not yet been taken.
• Play alternates until no more numbers can be marked.
## Quotations X: The Royal Road
More quotations especially for teachers:
There is no Royal Road to Geometry.
Teaching is the royal road to learning.
The title which I most covet is that of teacher. The writing of a research paper and the teaching of freshman calculus, and everything in between, falls under this rubric. Happy is the person who comes to understand something and then gets to explain it.
Claim your two free learning guide booklets, and be one of the first to hear about new books, revisions, and sales or other promotions.
## Hints and Solutions: Patty Paper Trisection
No peeking! This post is for those of you who have given the trisection proof a good workout on your own. If you have a question about the proof or a solution you would like to share, please post a comment here.
But if you haven’t yet worked at the puzzle, go back and give it a try. When someone just tells you the answer, you miss out on the fun. Figure it out for yourself — and then check the answer just to prove that you got it right.
|
2017-05-28 12:14:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.36311185359954834, "perplexity": 1052.4477263312585}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463609817.29/warc/CC-MAIN-20170528120617-20170528140617-00232.warc.gz"}
|
https://answers.opencv.org/question/197365/standard-velocity-unit/
|
# Standard Velocity Unit
ello everyone!
I'm currently using this code, to extract clusters from a Point Cloud and track them via a Kalman Filter with constant velocity: GitHub Source
Esentially it's segmenting a pointcloud into clusters of interest, based on euclidean distance. For each cluster it retrieves the cluster center (using PCL).
To associate the clusters between frames consistenly, OpenCV is used, to apply a hungarian algorithm to compare the euclidean distance of the cluster centers and pair them with the minimum distance cost.
A Kalman Filter from the OpenCV library is then applied to track the clusters across a path.
Right now I'm trying to publish the Position and Velocity Values above the segmented Clusters. I'm doing it by using:
KF0.statePost.at<float>(0)
KF0.statePost.at<float>(1)
KF0.statePost.at<float>(2)
KF0.statePost.at<float>(3)
(for all the Kalman Filters KFi I've initialized, 6 in total)
0 and 1 give me the correct X and Y Position, which I have verified, so 2 and 3 should be the corresponding velocity, however I do not know what the Units are. It cant be m/s because its of the order 10^(-6) and I can't think of any reasonable Unit. Maybe its an angle?
I've read online about pixel/second but that would be a) even more unlikely, because it should be higher than m/s and b) I'm only working with pointclouds and no image, so I'm not sure where I can find a pixel to meter relation.
Thanks in advance and best regards
edit retag close merge delete
|
2019-07-23 13:06:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17591619491577148, "perplexity": 1034.2101703577334}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195529406.97/warc/CC-MAIN-20190723130306-20190723152306-00165.warc.gz"}
|
https://mran.revolutionanalytics.com/snapshot/2020-12-31/web/packages/KernelKnn/vignettes/binary_classification_using_the_ionosphere_data.html
|
# binary classification using the ionosphere data
#### 2019-11-29
The following examples illustrate the functionality of the KernelKnn package for classification tasks. I’ll make use of the ionosphere data set,
data(ionosphere, package = 'KernelKnn')
apply(ionosphere, 2, function(x) length(unique(x)))
## V1 V2 V3 V4 V5 V6 V7 V8 V9 V10 V11 V12
## 2 1 219 269 204 259 231 260 244 267 246 269
## V13 V14 V15 V16 V17 V18 V19 V20 V21 V22 V23 V24
## 238 266 234 270 254 280 254 266 248 265 248 264
## V25 V26 V27 V28 V29 V30 V31 V32 V33 V34 class
## 256 273 256 281 244 266 243 263 245 263 2
# the second column will be removed as it has a single unique value
ionosphere = ionosphere[, -2]
When using an algorithm where the ouput depends on distance calculation (as is the case in k-nearest-neighbors) it is recommended to first scale the data,
# recommended is to scale the data
X = scale(ionosphere[, -ncol(ionosphere)])
y = ionosphere[, ncol(ionosphere)]
important note : In classification, both functions KernelKnn and KernelKnnCV accept a numeric vector as a response variable (here y) and the unique values of the labels should begin from 1. This is important otherwise the internal functions do not work. Furthermore, both functions (by default) return predictions in form of probabilities, which can be converted to labels by using either a threshold (if binary classification) or the maximum value of each column (if multiclass classification).
# labels should be numeric and begin from 1:Inf
y = c(1:length(unique(y)))[ match(ionosphere$class, sort(unique(ionosphere$class))) ]
# random split of data in train and test
spl_train = sample(1:length(y), round(length(y) * 0.75))
spl_test = setdiff(1:length(y), spl_train)
str(spl_train)
## int [1:263] 152 100 134 277 322 64 305 167 11 177 ...
str(spl_test)
## int [1:88] 1 5 14 15 16 20 21 28 29 32 ...
# evaluation metric
acc = function (y_true, preds) {
out = table(y_true, max.col(preds, ties.method = "random"))
acc = sum(diag(out))/sum(out)
acc
}
## The KernelKnn function
The KernelKnn function takes a number of arguments. To read details for each one of the arguments type ?KernelKnn::KernelKnn in the console.
A simple k-nearest-neighbors can be run with weights_function = NULL and the parameter ‘regression’ should be set to FALSE. In classification the Levels parameter takes the unique values of the response variable,
library(KernelKnn)
preds_TEST = KernelKnn(X[spl_train, ], TEST_data = X[spl_test, ], y[spl_train], k = 5 ,
method = 'euclidean', weights_function = NULL, regression = F,
Levels = unique(y))
head(preds_TEST)
## class_1 class_2
## [1,] 0.2 0.8
## [2,] 0.2 0.8
## [3,] 0.0 1.0
## [4,] 0.0 1.0
## [5,] 0.6 0.4
## [6,] 0.2 0.8
There are two ways to use a kernel in the KernelKnn function. The first option is to choose one of the existing kernels (uniform, triangular, epanechnikov, biweight, triweight, tricube, gaussian, cosine, logistic, silverman, inverse, gaussianSimple, exponential). Here, I use the canberra metric and the tricube kernel because they give optimal results (according to my RandomSearchR package),
preds_TEST_tric = KernelKnn(X[spl_train, ], TEST_data = X[spl_test, ], y[spl_train], k = 10 ,
method = 'canberra', weights_function = 'tricube', regression = F,
Levels = unique(y))
head(preds_TEST_tric)
## [,1] [,2]
## [1,] 0.0000000 1.0000000000
## [2,] 0.0000000 1.0000000000
## [3,] 0.5635877 0.4364123451
## [4,] 0.1441363 0.8558636754
## [5,] 0.9995187 0.0004813259
## [6,] 0.8994787 0.1005212960
The second option is to give a self defined kernel function. Here, I’ll pick the density function of the normal distribution with mean = 0.0 and standard deviation = 1.0 (the data are scaled to have mean zero and unit variance),
norm_kernel = function(W) {
W = dnorm(W, mean = 0, sd = 1.0)
W = W / rowSums(W)
return(W)
}
preds_TEST_norm = KernelKnn(X[spl_train, ], TEST_data = X[spl_test, ], y[spl_train], k = 10 ,
method = 'canberra', weights_function = norm_kernel, regression = F,
Levels = unique(y))
head(preds_TEST_norm)
## [,1] [,2]
## [1,] 0.0000000 1.00000000
## [2,] 0.0000000 1.00000000
## [3,] 0.4334149 0.56658510
## [4,] 0.1869283 0.81307169
## [5,] 0.9138637 0.08613632
## [6,] 0.8989750 0.10102495
The computations can be speed up by using the parameter threads (multiple cores can be run in parallel). There is also the option to exclude extrema (minimum and maximum distances) during the calculation of the k-nearest-neighbor distances using extrema = TRUE. The bandwidth of the existing kernels can be tuned using the h parameter.
K-nearest-neigbor calculations in the KernelKnn function can be accomplished using the following distance metrics : euclidean, manhattan, chebyshev, canberra, braycurtis, minkowski (by default the order ‘p’ of the minkowski parameter equals k), hamming, mahalanobis, pearson_correlation, simple_matching_coefficient, jaccard_coefficient and Rao_coefficient. The last four are similarity measures and are appropriate for binary data [0,1].
I employed my RandomSearchR package to find the optimal parameters for the KernelKnn function and the following two pairs of parameters give an optimal accuracy,
k method kernel
10 canberra tricube
9 canberra epanechnikov
## The KernelKnnCV function
I’ll use the KernelKnnCV function to calculate the accuracy using 5-fold cross-validation for the previous mentioned parameter pairs,
fit_cv_pair1 = KernelKnnCV(X, y, k = 10 , folds = 5, method = 'canberra',
weights_function = 'tricube', regression = F,
Levels = unique(y), threads = 5, seed_num = 5)
str(fit_cv_pair1)
## List of 2
## $preds:List of 5 ## ..$ : num [1:71, 1:2] 0.00648 0.25323 1 0.97341 0.92031 ...
## ..$: num [1:70, 1:2] 0 0 0 0 0.999 ... ## ..$ : num [1:70, 1:2] 0.353 0 0.17 0.212 0.266 ...
## ..$: num [1:70, 1:2] 0 0 0 0 0 ... ## ..$ : num [1:70, 1:2] 0.989 0 1 0 0 ...
## $folds:List of 5 ## ..$ fold_1: int [1:71] 5 26 233 243 30 41 237 229 19 11 ...
## ..$fold_2: int [1:70] 262 89 257 67 58 266 253 85 275 268 ... ## ..$ fold_3: int [1:70] 127 128 295 287 134 288 130 277 125 101 ...
## ..$fold_4: int [1:70] 313 301 317 318 316 142 175 157 146 147 ... ## ..$ fold_5: int [1:70] 195 326 225 332 342 347 206 219 218 214 ...
fit_cv_pair2 = KernelKnnCV(X, y, k = 9 , folds = 5,method = 'canberra',
weights_function = 'epanechnikov', regression = F,
Levels = unique(y), threads = 5, seed_num = 5)
str(fit_cv_pair2)
## List of 2
## $preds:List of 5 ## ..$ : num [1:71, 1:2] 0.0224 0.255 1 0.9601 0.8876 ...
## ..$: num [1:70, 1:2] 0 0 0 0 0.998 ... ## ..$ : num [1:70, 1:2] 0.36 0 0.164 0.185 0.202 ...
## ..$: num [1:70, 1:2] 0 0 0 0 0 ... ## ..$ : num [1:70, 1:2] 0.912 0 1 0 0 ...
## $folds:List of 5 ## ..$ fold_1: int [1:71] 5 26 233 243 30 41 237 229 19 11 ...
## ..$fold_2: int [1:70] 262 89 257 67 58 266 253 85 275 268 ... ## ..$ fold_3: int [1:70] 127 128 295 287 134 288 130 277 125 101 ...
## ..$fold_4: int [1:70] 313 301 317 318 316 142 175 157 146 147 ... ## ..$ fold_5: int [1:70] 195 326 225 332 342 347 206 219 218 214 ...
Each cross-validated object returns a list of length 2 ( the first sublist includes the predictions for each fold whereas the second gives the indices of the folds)
acc_pair1 = unlist(lapply(1:length(fit_cv_pair1$preds), function(x) acc(y[fit_cv_pair1$folds[[x]]],
fit_cv_pair1$preds[[x]]))) acc_pair1 ## [1] 0.9154930 0.9142857 0.9142857 0.9285714 0.9571429 cat('accurcay for params_pair1 is :', mean(acc_pair1), '\n') ## accurcay for params_pair1 is : 0.9259557 acc_pair2 = unlist(lapply(1:length(fit_cv_pair2$preds),
function(x) acc(y[fit_cv_pair2$folds[[x]]], fit_cv_pair2$preds[[x]])))
acc_pair2
## [1] 0.9014085 0.9142857 0.9000000 0.9142857 0.9571429
cat('accuracy for params_pair2 is :', mean(acc_pair2), '\n')
## accuracy for params_pair2 is : 0.9174245
In the KernelKnn package there is also the option to combine kernels (adding or multiplying) from the existing ones. For instance, if I want to multiply the tricube with the gaussian kernel, then I’ll give the following character string to the weights_function, “tricube_gaussian_MULT”. On the other hand, If I want to add the same kernels then the weights_function will be “tricube_gaussian_ADD”. I experimented with my RandomSearchR package combining the different kernels and the following two parameter settings gave optimal results,
k method kernel
16 canberra biweight_triweight_gaussian_MULT
5 canberra triangular_triweight_MULT
fit_cv_pair1 = KernelKnnCV(X, y, k = 16, folds = 5, method = 'canberra',
weights_function = 'biweight_triweight_gaussian_MULT',
regression = F, Levels = unique(y), threads = 5,
seed_num = 5)
str(fit_cv_pair1)
## List of 2
## $preds:List of 5 ## ..$ : num [1:71, 1:2] 0.0015 0.1516 1 0.9763 0.9674 ...
## ..$: num [1:70, 1:2] 0 0 0 0 0.999 ... ## ..$ : num [1:70, 1:2] 0.249 0 0.113 0.252 0.27 ...
## ..$: num [1:70, 1:2] 0 0 0 0 0 ... ## ..$ : num [1:70, 1:2] 0.991 0 1 0 0 ...
## $folds:List of 5 ## ..$ fold_1: int [1:71] 5 26 233 243 30 41 237 229 19 11 ...
## ..$fold_2: int [1:70] 262 89 257 67 58 266 253 85 275 268 ... ## ..$ fold_3: int [1:70] 127 128 295 287 134 288 130 277 125 101 ...
## ..$fold_4: int [1:70] 313 301 317 318 316 142 175 157 146 147 ... ## ..$ fold_5: int [1:70] 195 326 225 332 342 347 206 219 218 214 ...
fit_cv_pair2 = KernelKnnCV(X, y, k = 5, folds = 5, method = 'canberra',
weights_function = 'triangular_triweight_MULT',
regression = F, Levels = unique(y), threads = 5,
seed_num = 5)
str(fit_cv_pair2)
## List of 2
## $preds:List of 5 ## ..$ : num [1:71, 1:2] 0 0.0273 1 1 1 ...
## ..$: num [1:70, 1:2] 0 0 0 0 1 ... ## ..$ : num [1:70, 1:2] 0.1161 0 0.0105 0.307 0.022 ...
## ..$: num [1:70, 1:2] 0 0 0 0 0 ... ## ..$ : num [1:70, 1:2] 1 0 1 0 0 ...
## $folds:List of 5 ## ..$ fold_1: int [1:71] 5 26 233 243 30 41 237 229 19 11 ...
## ..$fold_2: int [1:70] 262 89 257 67 58 266 253 85 275 268 ... ## ..$ fold_3: int [1:70] 127 128 295 287 134 288 130 277 125 101 ...
## ..$fold_4: int [1:70] 313 301 317 318 316 142 175 157 146 147 ... ## ..$ fold_5: int [1:70] 195 326 225 332 342 347 206 219 218 214 ...
acc_pair1 = unlist(lapply(1:length(fit_cv_pair1$preds), function(x) acc(y[fit_cv_pair1$folds[[x]]],
fit_cv_pair1$preds[[x]]))) acc_pair1 ## [1] 0.9014085 0.9142857 0.9285714 0.9285714 0.9571429 cat('accuracy for params_pair1 is :', mean(acc_pair1), '\n') ## accuracy for params_pair1 is : 0.925996 acc_pair2 = unlist(lapply(1:length(fit_cv_pair2$preds),
function(x) acc(y[fit_cv_pair2$folds[[x]]], fit_cv_pair2$preds[[x]])))
acc_pair2
## [1] 0.9014085 0.9285714 0.9285714 0.9142857 0.9714286
cat('accuracy for params_pair2 is :', mean(acc_pair2), '\n')
## accuracy for params_pair2 is : 0.9288531
|
2022-12-07 00:12:23
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2130315601825714, "perplexity": 3844.211625275917}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711121.31/warc/CC-MAIN-20221206225143-20221207015143-00032.warc.gz"}
|
http://quant.stackexchange.com/questions?page=4&sort=unanswered
|
# All Questions
95 views
### Event studies using revenue data vs. measuring abnormal returns
This may be a silly question, but does there exist a methodology for examining the impact of "events" on companies that are not publicly traded? I suppose it would look at abnormal revenues rather ...
60 views
### Portfolio insurance with a coherent risk measure (CVaR)
I would like to analysis of portfolio insurance under a coherent risk-measure method (CVaR), How can I achieve that? Is there a way to turn the problem into a linear programming problem? or to ...
78 views
### What does it mean to adjust for short-run liquidity in finding risk-free rate of return
Risk-free rate of return should equal the expected long-run growth rate of the economy with an adjustment for short-run liquidity. What is meant by the last phrase, "adjustment for short-run ...
185 views
### Philips-Ouliaris test for cointegration
I'm trying to implement cointegration tests using the R urca package. I've figured out the Johansen test (ca.jo), but I'm having trouble with the Philips-Ouliaris test (ca.po). I have two questions: ...
160 views
### Error term/Innovation process in ARCH/GARCH processes?
I am wondering about the distribution of the error term/innovation process in a ARCH/GARCH process and its implementation, I am not sure about some points. The basic assumption is ...
60 views
### Is there an appropriate sequence to tests during model diagnosis?
How should one order (sequence) the following tests? Stationarity test Johansen cointegration test Normality/Histogram test Autocorrelation test Heteroskedasticity test Multicollinearity test ...
144 views
### How replicate data using PCA
I have a set of date covering petrol prices. My example has two columns where each row represents a sequential date. ...
88 views
### Benchmarking risk
Given the portfolio return $R$ and the benchmark return $B$, I want to define a risk indicator, measuring the ability to beat the benchmark ($R>B$), given the downside risk taken; the latter not ...
299 views
### How to correctly construct a value- and equally weighted portfolio consisting of property-types?
A problem of which I couldn’t find the answer on the forum is about the construction of equally-weighted and value-weighted portfolio. I want to compute the equally-weighted property-type portfolio ...
78 views
### What does negative gamma mean in APGARCH model?
I got a gamma of -0.1321677. ...
266 views
### Gamma vs. Volatility Risk
Original Question: What is the link between Gamma and the Volatility Risk? It leads me to ask: - What is the Volatility Risk definition and what are the good practices to measure it? Thinking about ...
337 views
### Does the geometric Ornstein-Uhlenbeck process have stationary variance?
I know that the long run variance of the standard OU process is $\lim_{s\rightarrow \infty}\mbox{Var}(P_{t+s}|P_t) = \frac{\sigma^2}{2\theta}$ I'm using the geometric version of the process. I ...
131 views
### EUR/PLN and EUR/USD delta-term-vol surface quoting convension
does anyone know for sure what is the FX market convension to quote delta-term pairs for EUR/PLN, for EUR/USD. I know that for EUR/PLN it should be delta p.a forward, for EUR/USD it should be delta ...
255 views
### Markov-Switching E-GARCH with R
I am looking for a R library for modeling a Markov-Switching E-GARCH process. In other questions at StackExchange related to GARCH models, the package rugarch is often mentionned. Do you recommend it ...
67 views
### How to work out weights for a portfolio based on an inverse ratio with positive and negative values?
I am trying to work out how to determine weights for the assets in order to form a portfolio. The ratio I am using is EV/EBIT, hence the smaller the better. The problem is I don't know how to handle ...
206 views
### Yield Curve Volatility
Let you have several issuers, and let each issuer have its yield curve built up with liquid plain vanilla fixed rate bonds. Each yield curve has its slope and its curvature, and they obviously change ...
128 views
...
114 views
### Is it possible to model general wrong way risk via concentration risk?
General wrong way risk is defined as due to a positive correlation between the level of exposure and the default probability of the counterparty due to general market factors. (Specific wrong way risk ...
74 views
### Changes to option valuation for dollar-pegged underlying
In Russia, options on futures on the RTS index are priced in points instead of currency, with points being directly related to the value of the US dollar such that, for example, if the dollar rises, ...
121 views
### Measure change in a bond option problem
This is not a homework or assignment exercise. I'm trying to evaluate $\displaystyle \ \ I := E_\beta \big[\frac{1}{\beta(T_0)} K \mathbf{1}_{\{B(T_0,T_1) > K\}}\big]$, where $\beta$ is the ...
151 views
### Optimal Position Size with Transaction Costs given Forecast Mean and StDev
I have rather a challenging question. I'm hoping that someone can share their experience. I will build up the problem in steps. Let's start our thinking with the idea of a buy and hold strategy of ...
97 views
### Combining Mulitple Forecasts? Budged Constraints?
I'm hoping that someone can lend a hand. I have been reading various papers on how to combine multiple forecast time series. The main paper is Granger and Bates 1969. The suggestion here is that there ...
118 views
### What are the proper metrics to look at for checking discrepancies in these two time series
I am obtaining bid/ask price and volume market data from two different sources for the same ticker and for the same day and checking to see that at time intervals X they are "roughly the same". The ...
113 views
### Probability Density of Returns of Bonus Certificates
Could anyone please help me with the following? I need to generate a histogram (resp. probability density) of returns of a bonus-certificate. A bonus-certificate can be replicated by an underlying ...
90 views
### Difference between kappa and delta in mixed-effects model
(This question is a crosspost from Cross Validated) I have a following stochastic model describing evolution of a process (Y) in space and time. Ds and Dt are domain in space (2D with x and y axes) ...
432 views
### Can the Heston model be shown to reduce to the original Black Scholes model if appropriate parameters are chosen?
Summary For Heston model parameters that render the variance process constant, the solution should revert to plain Black-Scholes. Closed from solutions to the Heston model don't seem to do this, even ...
176 views
### Is inverted Japanese style curve persistent when negative rates are real / market - observed?
The time evolution of inverted curves does model / forecast a future recession and not necessarily contains the current liquidity- / credit-related aspect. The historical Japanese style inverted yield ...
116 views
### Stochastic discount factor (aka deflator or pricing kernel) and class D processes
When (under what assumptions on the model) does a Stochastic Discount Factor need to be of Class D? What would be the implications if it was not? Is it connected to one of the no-arbitrage notions?
1k views
### Backtest pair trade strategy in R
I am looking for some tips on how to run a simple backtest on a pairtrading strategy intraday using eg. 30minute bars. I have calculated the spread, ...
280 views
### What is the use of the Euler equation in the Ramsey growth model?
I apologise for being brief, but I don't understand how is Euler equation used in the Ramsey growth model. I am reading a textbook "Dynamic General Equilibrium Modeling" and there is mentioned about ...
|
2013-12-11 15:57:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8649187684059143, "perplexity": 2139.664225122642}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164038538/warc/CC-MAIN-20131204133358-00084-ip-10-33-133-15.ec2.internal.warc.gz"}
|
https://www.physicsforums.com/threads/solving-a-tricky-integral-to-normalize-a-wave-function.569928/
|
# Homework Help: Solving a tricky integral to normalize a wave function
1. Jan 22, 2012
1. The problem statement, all variables and given/known data
A particle of mass m is moving in one dimension in a potential V(x,t). The wave
function for the particle is: ψ = Axe^([-sqrt(km)/2h_bar]*x^2)e^([-isqrt(k/m)]*3t/2). For -infinitity < x < infinity, where k and A are constants. Normalize this wave function.
2. Relevant equations
Normalization of wave function in one dimension:
∫ψ*ψdx = 1
3. The attempt at a solution
So I said that ψ* would be the same as ψ except i would be negative, so ψ*ψ would be A^2*x^2*e^([-sqrt(km)/h_bar]*x^2). To normalize, ∫ψ*ψdx = A^2*∫x^2*e^([-sqrt(km)/h_bar]*x^2)dx = 1, with the integration being from negative infinity. I am mostly just having trouble getting through this integral. I tried integration by parts:
u = x^2, du = 2xdx
v = [-h_bar/sqrt(km)] * [1/2x] * e^([-sqrt(km)/h_bar]*x^2), dv = e^-([sqrt(km/h_bar]*x^2) dx.
So ∫ψ*ψdx = A^2 [uv-∫vdu] = A^2 [[x^2*[-h_bar/sqrt(km)] * [1/2x] * e^([-sqrt(km)/h_bar]*x^2)] (evaluated from negative infinity to positive infinity) - ∫[-h_bar/sqrt(km)] * e^([-sqrt(km)/h_bar]*x^2) dx, again from negative infinity to positive infinity.
So for the first term, I don't know how to evaluate that from negative infinity to infinity because there is a 2x in the denominator and an e^(x^2) in the numerator, and the only way I really know how to evaluate is by just plugging values in. I also don't see an obvious way to evaluate that integral. So is there a way to evaluate this or is integration by parts just not going to work for me? Once I find this integral I can just take the square root of its reciprocal to get the value of A.
This is my first post so I have read the rules and am hoping I have posted in the right forum, etc. Any help on this problem would be very much appreciated.
2. Jan 22, 2012
### Redbelly98
Staff Emeritus
Welcome to PF!
You won't be able to solve that integral analytically. But since this is physics (not math), surely you are allowed to look it up? Check in a table of integrals, but be sure to look in the definite integrals section. It will probably look something like
0 x2 e-x2 = ___?
or
0 x2 e-ax2 = ___?
3. Jan 22, 2012
Thanks for your response! Darn, that is something we have little experience with and my teacher made it sound like we should know how to evaluate this. So since this integral runs from -infinity to infinity, would I double the integral from 0 to infinity?
The fact that this can't be solved analytically actually makes me suspicious, are you sure that I am trying to evaluate the right integral based on the problem?
4. Jan 22, 2012
### Redbelly98
Staff Emeritus
Yes. When the integrand is an even function, as it is here, you can do that.
Looks right to me. Just to be sure, we have
$$\Psi = A \ x \ e^{-\sqrt{km} \ \cdot \ x^2 / (2 \hbar)} \cdot e^{-i \sqrt{k/m} \ \cdot \ 3 t/2},$$
correct?
5. Jan 22, 2012
Yep that's right, and I finally got a response from my prof, evidently this is called the gamma function and you do have to look it up. Thanks so much for your help though!
6. Jan 22, 2012
### Redbelly98
Staff Emeritus
You're welcome!
7. Feb 16, 2012
### kviksand81
What was/is the answer to this problem? I have the exact same problem and just can't make it work!
8. Feb 16, 2012
### Ray Vickson
You need to evaluate $I = \int_{-\infty}^{\infty} x^2 e^{-a x^2} dx,$ where a > 0. You can integrate by parts, using $u = x, \: dv = x e^{-ax^2} dx = (1/2a) d(ax^2) e^{-ax^2}.$ You will end up having to do an integral of the form $\int_{-\infty}^{\infty} e^{-ax^2} dx,$ which you are certainly supposed to have seen before.
RGV
9. Feb 16, 2012
### Redbelly98
Staff Emeritus
Please show us what you have tried towards solving this. Also, you should familiarize yourself with our forum policy on getting help with homework problems by reading the section titled Homework Help at this link:
|
2018-10-17 15:52:28
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.802597165107727, "perplexity": 538.1123626038424}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583511203.16/warc/CC-MAIN-20181017153433-20181017174933-00015.warc.gz"}
|
https://tex.stackexchange.com/questions/407534/reduce-gap-between-subfloat-and-subcaption
|
# Reduce gap between subfloat and subcaption
How can I reduce the gap between the subfloat and its caption? I tried to use
\captionsetup[subfigure]{aboveskip=0pt}
but it doesnt work for me. Below is the snippet i have and the output
\begin{figure}
\captionsetup[subfigure]{labelformat=empty}
\captionsetup[subfigure]{aboveskip=0pt}
\centering
\subfloat[Ground Truth]{{\includegraphics[width=\textwidth]{ood-ms/ood-1-gt.png} }}
\\ \vspace{-1\baselineskip}
\subfloat[Recurrent Neural Network]{{\includegraphics[width=\textwidth]{ood-ms/ood-1-rnn.png} }}
\\ \vspace{-1\baselineskip}
\subfloat[]{{\includegraphics[width=\textwidth]{ood-ms/ood-1-vrnn.png} }}
\\ \vspace{-0.5\baselineskip} \noindent\rule{14cm}{0.4pt}
\subfloat[]{{\includegraphics[width=\textwidth]{ood-ms/ood-3-gt.png} }}
\\ \vspace{-1\baselineskip}
\subfloat[]{{\includegraphics[width=\textwidth]{ood-ms/ood-3-rnn.png} }}
\\ \vspace{-1\baselineskip}
\subfloat[]{{\includegraphics[width=\textwidth]{ood-ms/ood-3-vrnn.png} }}
\\ \vspace{-0.5\baselineskip} \noindent\rule{14cm}{0.4pt}
\subfloat[]{{\includegraphics[width=\textwidth]{ood-ms/ood-1-gt.png} }}
\\ \vspace{-1\baselineskip}
\subfloat[]{{\includegraphics[width=\textwidth]{ood-ms/ood-1-rnn.png} }}
\\ \vspace{-1\baselineskip}
\subfloat[]{{\includegraphics[width=\textwidth]{ood-ms/ood-1-vrnn.png} }}
\\ \vspace{-0.5\baselineskip} \noindent\rule{14cm}{0.4pt}
\subfloat[]{{\includegraphics[width=\textwidth]{ood-ms/ood-3-gt.png} }}
\\ \vspace{-1\baselineskip}
\subfloat[]{{\includegraphics[width=\textwidth]{ood-ms/ood-3-rnn.png} }}
\\ \vspace{-1\baselineskip}
\subfloat[]{{\includegraphics[width=\textwidth]{ood-ms/ood-3-vrnn.png} }}
\\ \vspace{-1\baselineskip}
\caption{}
\label{fig:ms}
\end{figure}
Here is the output
• Try just using skip= (I don't recommend 0pt), and remove the \vspaces. – Bernard Dec 24 '17 at 23:19
• i need the vspace to keep my large figure in 1 page – Kong Dec 24 '17 at 23:22
• With skip=0pt+ \vspace{-\baselineskip}, there are chances the figures touch each other. Is there any reason why some rows might not go to the following page? – Bernard Dec 24 '17 at 23:27
• i need all the subplots to be in a page and if i do not use \vspace some of them dont show up in the page. I tried \captionsetup[subfigure]{skip=-10pt} and some other numbers but i dont see any effect – Kong Dec 24 '17 at 23:31
• any easier solution ? – Kong Dec 27 '17 at 20:51
like this?
instead of \subfloat environments from subfig package i would rather use tabular and for rules use \midrule from booktabs package:
\documentclass{article}
\usepackage[demo]{graphicx}
\usepackage{caption}
\usepackage{booktabs}
%-------------------------------- show page layout, only for test
\usepackage{showframe}
\renewcommand\ShowFrameLinethickness{0.15pt}
\renewcommand*\ShowFrameColor{\color{red}}
%---------------------------------------------------------------%
\begin{document}
\begin{figure}[p]
\newcommand\insertimage[1]{\includegraphics[width=\textwidth,height=13mm]{#1}}
\centering
\begin{tabular*}{\textwidth}{@{}c@{}}
\insertimage{ood-ms/ood-1-gt.png} \\
Ground Truth \\
\insertimage{ood-ms/ood-1-rnn.png} \\
Recurrent Neural Network \\
\insertimage{ood-ms/ood-1-vrnn.png} \\
\midrule
\insertimage{ood-ms/ood-3-gt.png} \\
\insertimage{ood-ms/ood-3-rnn.png} \\
\insertimage{ood-ms/ood-3-vrnn.png} \\
\midrule
\insertimage{ood-ms/ood-1-gt.png} \\
\insertimage{ood-ms/ood-1-rnn.png} \\
\insertimage{ood-ms/ood-1-vrnn.png} \\
\midrule
• @kong, sorry, i don't understand you. i haven't your figure, so i use demo option. size of figures are determined on the same way as you dona in your mwe. – Zarko Dec 27 '17 at 21:36
|
2019-10-13 19:58:31
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6341182589530945, "perplexity": 7020.015474808712}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986647517.11/warc/CC-MAIN-20191013195541-20191013222541-00484.warc.gz"}
|
https://math.paperswithcode.com/paper/p-torsion-etale-sheaves-on-the-jacobian-of-a
|
# $p$-torsion \'etale sheaves on the Jacobian of a curve
27 Sep 2019 Zhao Yifei
Suppose $X$ is a smooth, proper, geometrically connected curve over $\mathbb F_q$ with an $\mathbb F_q$-rational point $x_0$. For any $\mathbb F_q^{\times}$-character $\sigma$ of $\pi_1(X)$ trivial on $x_0$, we construct a functor $\mathbb L_n^{\sigma}$ from the derived category of coherent sheaves on the moduli space of deformations of $\sigma$ over the Witt ring $W_n(\mathbb F_q)$ to the derived category of constructible $W_n(\mathbb F_q)$-sheaves on the Jacobian of $X$... (read more)
PDF Abstract
# Code Add Remove Mark official
No code implementations yet. Submit your code now
# Categories
• ALGEBRAIC GEOMETRY
• NUMBER THEORY
|
2021-01-18 13:24:26
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7332282662391663, "perplexity": 515.9416014877233}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703514796.13/warc/CC-MAIN-20210118123320-20210118153320-00416.warc.gz"}
|
http://openstudy.com/updates/4fb7c985e4b05565342d409e
|
## anonymous 4 years ago (3x-8)^1/2=5
1. ParthKohli
$$\Large \color{MidnightBlue}{\Rightarrow \sqrt{3x - 8} = 5 }$$ Power of one half can be expressed as square root of that value. $$\Large \color{MidnightBlue}{\Rightarrow 3x - 8 = 25 }$$ $$\Large \color{MidnightBlue}{\Rightarrow 3x = 33 }$$ $$\Large \color{MidnightBlue}{\Rightarrow x = 11 }$$
2. ParthKohli
Got it? Wasn't it easy? =)
3. anonymous
oh okay. thank you.
|
2016-07-02 00:30:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33161306381225586, "perplexity": 5252.499799234068}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783404382.73/warc/CC-MAIN-20160624155004-00070-ip-10-164-35-72.ec2.internal.warc.gz"}
|
https://topospaces.subwiki.org/w/index.php?title=Two_sides_lemma&oldid=1944
|
# Two sides lemma
(diff) ← Older revision | Latest revision (diff) | Newer revision → (diff)
This term is nonstandard and is being used locally within the wiki. For its use outside the wiki, please define the term when using it.
## Statement
The inclusion of two adjacent sides in the unit square is equivalent to the inclusion of one side in the unit square, viz., in the inclusion of $I$ in $I \times I$.
An explicit isotopy can also be written down, which may be more convenient in some situations.
## Applications
• The inclusion of a point in the unit interval is a cofibration. To prove this, we note that this boils down to proving that two adjacent sides of the square are a retract of the unit square, which, by the two sides lemma, is equivalent to requiring that $I$ is a retract of $I \times I$ which is clearly true.
The retraction from $I \times I$ to two sides can also be written down explicitly; this is more useful at times.
|
2021-11-28 18:27:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 5, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8924422860145569, "perplexity": 299.94455710031013}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358570.48/warc/CC-MAIN-20211128164634-20211128194634-00247.warc.gz"}
|
https://cs.stackexchange.com/questions/144845/maximum-weight-matching-with-a-bounded-number-of-fractional-edges
|
# Maximum-weight matching with a bounded number of fractional edges
In graphs with odd cycles, the maximum weight of a fractional matching may be higher than that of a standard matching. For example, in a cycle of length 3, where all edges have weight 1, the maximum-weight matching contains a single edge so its weight is 1, but the maximum-weight fractional matching contains 50% of each edge so its weight is 1.5. So, allowing some edges to be fractional can improve the total matching weight.
Suppose we want to allow only a limited number of edges to be fractional (e.g. at most three edges). What is a polynomial-time algorithm for finding a maximum-weight fractional matching with this constraint?
When the bound on the number of fractional edges is 0, the problem can be solved by Edmonds' algorithm in time $$O(n^2 m)$$ (where $$n$$ is the number of vertices and $$m$$ the number of edges). When the bound is $$m$$, the problem can be solved in polynomial time by solving a linear program. Based on this, I believe that for any limit between 0 and $$m$$, the problem should be solvable in polynomial time. But so far I could not find any polynomial-time algorithm.
EDIT: xskxzr commented on a paper by Bourjolly and Pulleyblank, which is indeed closely-related. Its focus is on minimum fractional vertex cover (min-FVC), which is the dual of maximum fractional matching (max-FM; the linear program of min-FVC is the dual of the linear program of max-FM). What I understood from their paper is the following:
• There is an algorithm (in Section 4) for finding a max-FM in a general graph in time $$O(|V| |E|)$$.
• The same algorithm finds sets of vertices $$V_0,V_1$$ that have a weight of $$0$$ ($$1$$) in any min-FVC.
• They can find a set $$F$$ of vertices that have a fractional weight in any min-FVC.
• They have an algorithm (in Section 5) for finding a min-FVC in which only the vertices of $$F$$ are fractional; therefore, the number of fractional vertices is minimized, subject to finding a globally-minimum FVC. The run-time, if I understand correctly, is $$O(|V| |E|)$$.
This raises two follow-up questions:
1. Suppose we are given an integer $$k$$, which is smaller than $$|F|$$, and we want to find an FVC with at most $$k$$ fractional vertices. The size of this FVC will, by definition, be larger than the min-FVC. Can we find an FVC of minimum cardinality, subject to the constraint of at most $$k$$ fractional vertices? Ideally, the run-time should not depend on $$k$$.
2. Is it possible to find a set of edges that must have a fractional weight in every max-FM? Is it possible to find a max-FM in which only these edges are fractional?
3. Is it possible to solve problem 1 for max-FM?
• Could you edit the question to specify all your requirements? Currently your question could be answered by giving an exponential-time algorithm and that would meet all stated requirements, but I doubt such an answer would be useful. Are you looking for a polynomial-time algorithm? Any algorithm that is faster than the best you already know of, and if so, what is the best algorithm you know of so far?
– D.W.
Oct 25, 2021 at 7:32
• @D.W. I clarified that the algorithm should be polynomial time. When the bound on the number of fractional edges is either 0 or $m$ (the total number of edges in the graph), it is well known that the problem can be solved in polynomial time, so I believe it should be polynomial also for any fixed number between 0 and $m$. But so far I could not find any such algorithm. Oct 26, 2021 at 9:26
• I think Bourjolly and Pulleyblank's algorithm should still work, though I don't verify it carefully. Nov 15, 2021 at 6:39
• @xskxzr thanks a lot, this paper is indeed very relevant! I added to the OP a summary of the paper, as I understood it. Nov 17, 2021 at 17:52
Let $$S$$ be the subset of edges that are "really fractional" (i.e., in (0, 1)) in the optimal solution. Since you only allow a constant number of edges to be fractional, you can try every possibility of $$S$$. Given $$S$$, let $$V(S)$$ denote the vertices incident to edges in $$S$$, you can then remove all edges incident to $$V(S)$$ but not in $$S$$. Now you can do maximum fractional matching on $$S$$ and maximum integral matching on edges not in $$S$$ seperately.
• It is indeed polynomial-time when the number of fractional vertices is considered a fixed parameter (not part of the input). But if the number of fractional vertices is, say, $m/4$, then the run time is exponential in $m$. But if the number of fractional vertices is $m$, that the problem is again polynomial-time - which seems strange. Oct 30, 2021 at 20:01
• @ErelSegal-Halevi It's not strange. Many problems have similar structure. For example, the independent set problem is easy to solve if $k$ is a constant or $n$ minus a constant. By the way, I guess your problem is NP-hard in general cases, but it's not that easy to prove. Oct 31, 2021 at 2:51
|
2022-08-16 06:54:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 26, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8079944252967834, "perplexity": 215.9263008643281}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572221.38/warc/CC-MAIN-20220816060335-20220816090335-00637.warc.gz"}
|
https://gmatclub.com/forum/if-n-is-the-sum-of-the-first-40-positive-integers-what-is-the-greates-242230.html
|
GMAT Question of the Day - Daily to your Mailbox; hard ones only
It is currently 15 Nov 2018, 23:43
GMAT Club Daily Prep
Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
Events & Promotions
Events & Promotions in November
PrevNext
SuMoTuWeThFrSa
28293031123
45678910
11121314151617
18192021222324
2526272829301
Open Detailed Calendar
• Free GMAT Strategy Webinar
November 17, 2018
November 17, 2018
07:00 AM PST
09:00 AM PST
Nov. 17, 7 AM PST. Aiming to score 760+? Attend this FREE session to learn how to Define your GMAT Strategy, Create your Study Plan and Master the Core Skills to excel on the GMAT.
• GMATbuster's Weekly GMAT Quant Quiz # 9
November 17, 2018
November 17, 2018
09:00 AM PST
11:00 AM PST
Join the Quiz Saturday November 17th, 9 AM PST. The Quiz will last approximately 2 hours. Make sure you are on time or you will be at a disadvantage.
If n is the sum of the first 40 positive integers, what is the greates
Author Message
TAGS:
Hide Tags
Manager
Joined: 30 May 2017
Posts: 66
Concentration: Finance, General Management
GMAT 1: 690 Q47 V38
GPA: 3.23
If n is the sum of the first 40 positive integers, what is the greates [#permalink]
Show Tags
08 Jun 2017, 12:52
3
4
00:00
Difficulty:
25% (medium)
Question Stats:
71% (00:45) correct 29% (00:41) wrong based on 161 sessions
HideShow timer Statistics
If n is the sum of the first 40 positive integers, what is the greatest prime factor of n?
A. 29
B. 37
C. 41
D. 17
E. 19
_________________
Veritas Prep 6/18/17 600 Q:38 V:35 IR:5
Veritas Prep 6/29/17 620 Q:43 V:33 IR:4
Manhattan 7/12/17 640 Q:42 V:35 IR:2.4
Veritas Prep 7/27/17 640 Q:41 V:37 IR:4
Manhattan 8/9/17 670 Q:44 V:37 IR:3
Veritas Prep 8/21/17 660 Q:45 V:36 IR:7
GMAT Prep 8/23/17 700 Q:47 V:38 IR:8
GMAT Prep 8/27/17 730 Q:49 V:40 IR:8
Veritas Prep 8/30/17 690 Q:47 V:37 IR:8
Senior SC Moderator
Joined: 22 May 2016
Posts: 2095
If n is the sum of the first 40 positive integers, what is the greates [#permalink]
Show Tags
08 Jun 2017, 14:02
1
2
Smokeybear00 wrote:
If n is the sum of the first 40 positive integers, what is the greatest prime factor of n?
A. 29
B. 37
C. 41
D. 17
E. 19
Nice question.
Sum of consecutive integers =
$$\frac{(First Term + Last Term)*(n)}{2}$$, where n = number of terms
The first term is 1. The last term is 40.
Using the formula above, the sum is
$$\frac{(1 + 40)*(40)}{2}$$
=(41)(40) / 2
Stop there. 41 is a factor of the sum, and 41 is prime.
CEO
Joined: 11 Sep 2015
Posts: 3120
Re: If n is the sum of the first 40 positive integers, what is the greates [#permalink]
Show Tags
08 Jun 2017, 14:48
Top Contributor
Smokeybear00 wrote:
If n is the sum of the first 40 positive integers, what is the greatest prime factor of n?
A. 29
B. 37
C. 41
D. 17
E. 19
Another useful formula: 1 + 2 + 3 + 4 + 5 + . . . .+ k = (k)(k+1)/2
So, for example, 1 + 2 + 3 + 4 + .... + 10 = (10)(10 + 1)/2 = 110/2 = 55
n = 1 + 2 + 3 + 4 + 5 + . . . .+ 40
= (40)(40 + 1)/2
= (40)(41)/2
= (20)(41)
At this point, we can see that 41 will be the greatest prime factor of n.
For "fun" let's finish the prime factorization of n.
We left off at: n = (20)(41)
Continue to get: n = (2)(2)(5)(41)
Cheers,
Brent
_________________
Brent Hanneson – GMATPrepNow.com
Director
Joined: 13 Mar 2017
Posts: 630
Location: India
Concentration: General Management, Entrepreneurship
GPA: 3.8
WE: Engineering (Energy and Utilities)
Re: If n is the sum of the first 40 positive integers, what is the greates [#permalink]
Show Tags
31 Aug 2017, 05:39
Smokeybear00 wrote:
If n is the sum of the first 40 positive integers, what is the greatest prime factor of n?
A. 29
B. 37
C. 41
D. 17
E. 19
n = 1+2+3+4+..... + 40
= 40*41/2 = 20*41
So, greatest prime factor of n = 41
_________________
CAT 2017 99th percentiler : VA 97.27 | DI-LR 96.84 | QA 98.04 | OA 98.95
UPSC Aspirants : Get my app UPSC Important News Reader from Play store.
MBA Social Network : WebMaggu
Appreciate by Clicking +1 Kudos ( Lets be more generous friends.)
What I believe is : "Nothing is Impossible, Even Impossible says I'm Possible" : "Stay Hungry, Stay Foolish".
VP
Joined: 07 Dec 2014
Posts: 1113
Re: If n is the sum of the first 40 positive integers, what is the greates [#permalink]
Show Tags
29 Oct 2018, 14:35
Smokeybear00 wrote:
If n is the sum of the first 40 positive integers, what is the greatest prime factor of n?
A. 29
B. 37
C. 41
D. 17
E. 19
(40+1)/2=20.5 mean→
40*20.5=820 sum→
820=(40/2)(2*20.5)→
820=20*41
41
C
Senior Manager
Joined: 19 Oct 2013
Posts: 498
Location: Kuwait
GPA: 3.2
WE: Engineering (Real Estate)
Re: If n is the sum of the first 40 positive integers, what is the greates [#permalink]
Show Tags
29 Oct 2018, 14:41
Smokeybear00 wrote:
If n is the sum of the first 40 positive integers, what is the greatest prime factor of n?
A. 29
B. 37
C. 41
D. 17
E. 19
Sum of first positive integers = x* (x+1)/2
Where x is the number of integers.
We are given number of positive integers = 40
so n would be 40 * 41/2
41 is the largest prime.
Posted from my mobile device
Intern
Joined: 20 Sep 2018
Posts: 30
Re: If n is the sum of the first 40 positive integers, what is the greates [#permalink]
Show Tags
29 Oct 2018, 16:46
Smokeybear00 wrote:
If n is the sum of the first 40 positive integers, what is the greatest prime factor of n?
A. 29
B. 37
C. 41
D. 17
E. 19
Sum = 40 × 41 / 2 = 820
Answer is simply the greatest prime factor of 820.
41
Re: If n is the sum of the first 40 positive integers, what is the greates &nbs [#permalink] 29 Oct 2018, 16:46
Display posts from previous: Sort by
|
2018-11-16 07:43:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5174589157104492, "perplexity": 3268.103631476952}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039742981.53/warc/CC-MAIN-20181116070420-20181116092420-00319.warc.gz"}
|
https://agentfoundations.org/item?id=349
|
Fixed point theorem in the finite and infinite case
discussion post by Victoria Krakovna 1057 days ago | Jim Babcock and Patrick LaVictoire like this | discuss
Janos and I started working on extending the fixed point theorem to the infinite case at MIRI’s June decision theory workshop with Patrick and Benja. This post attempts to exhibit the finite version of the theorem and speculates about a plausible extension.
# Algorithm for the finite case
Suppose we’re given variables $$p_1,\dots,p_n$$, and statements $$p_i\leftrightarrow \phi_i(p_1,\dots,p_n)$$ where each $$\phi_i$$ is fully modalized (no variables occur outside modal subformulas). We will describe an algorithm for constructing sentences $$\psi_i$$ with no free variables, such that the original statements are equivalent to $$p_1\leftrightarrow \psi_1,\dots,p_n\leftrightarrow \psi_n$$.
For simplification purposes, we will assume that each $$\phi_i$$ is only singly modalized (none of the modal subformulas contain further modal subformulas). If not, we can introduce new variables for each subformula of the form $$\square \phi_{i,j}$$ that occurs in a fully modalized context.
Now consider a sequence of theories $$W_0,W_1,\dots$$, where $$W_i\equiv PA\cup\{\square^{i+1} \bot,\lnot\square^i\bot\}$$.
In $$W_0$$, it’s easy to determine the truth value of each $$p_i$$: every modal subformula can be replaced with $$\top$$, leaving a propositional formula with no variables.
Now suppose we’ve done this for $$W_0,\dots,W_{n-1}$$. Then, a statement $$\square \phi$$ will be true in $$W_n$$ iff
$PA\vdash(\square^{n+1}\bot\land\lnot\square^n\bot)\to\square\phi \Leftrightarrow\square^n\bot\lor\square(\square^n \bot \rightarrow \phi) \Leftrightarrow\square^n\bot\lor\square\bigwedge_{i=0}^{n-1}((\square^{i+1} \bot\land \lnot\square^i \bot)\rightarrow \phi)$
Therefore $$\square \phi$$ will be true in $$W_n$$ iff $$\phi$$ is true in $$W_0,\dots,W_{n-1}$$; this will let us evaluate the truth value of $$p_i$$ in all of the theories $$W_1,\dots,W_n$$.
It’s clearly the case that every modal subformula will have truth value stabilizing, therefore every $$p_i$$ will also. So there is an $$N$$ such that $$W_n$$ has the same truth values as $$W_N$$ for $$n>N$$. Now if $$W_N\vdash p_i$$, construct $$\psi_i\equiv\lnot\bigvee_{i:W_i\vdash \lnot p_i} (\square^{i+1}\bot\land\lnot\square^i\bot)$$; otherwise construct $$\psi_i\equiv\bigvee_{i:W_i\vdash p_i} (\square^{i+1}\bot\land\lnot\square^i\bot)$$. These are variable-free formulas.
# Infinite case example: Procrastination Bot
def ProcrastinationBot$$_N$$(X):
if $$PA+N \vdash \square X(PB_{N+1}) = C$$:
return C
else: return D
Constructing the fixed point
Let $$p_{ij}$$ denote whether $$PB_i$$ cooperates with $$PB_j$$. Then $p_{ij} \leftrightarrow \square_i p_{j, i+1} =\square_i \square_j p_{i+1,j+1}$ where $$\square_i$$ stands for $$\lnot \square^i \bot \rightarrow \square$$ (e.g. $$\square_0 = \square$$).
Then the following statements hold $p_{00} \leftrightarrow\square \square p_{11}$ $p_{11} \leftrightarrow \square_1 \square_1 p_{22} \equiv \lnot \square \bot \rightarrow \square (\lnot \square \bot \rightarrow \square p_{22})$ $\dots$
In $$W_0$$, any statement with a box in front of it is true, so any statement where a boxed statement is implied is also true. Thus, all the relevant statements are true in $$W_0$$:
Theory $$p_{00}$$ $$\square \square p_{11}$$ $$p_{11}$$ $$\lnot \square \bot \rightarrow \square (\lnot \square \bot \rightarrow \square p_{22})$$ $$\lnot \square \bot \rightarrow \square p_{22}$$ $$p_{22}$$ $$\dots$$
$$W_0$$ $$\top$$ $$\top$$ $$\top$$
In $$W_{i+1}$$, all the implied statements are boxed statements that were true in $$W_i$$, e.g. $$\square p_{22}$$ or $$\lnot \square \bot \rightarrow \square p_{22}$$. Thus, all the relevant statements are true in $$W_{i+1}$$:
Theory $$p_{00}$$ $$\square \square p_{11}$$ $$p_{11}$$ $$\lnot \square \bot \rightarrow \square (\lnot \square \bot \rightarrow \square p_{22})$$ $$\lnot \square \bot \rightarrow \square p_{22}$$ $$p_{22}$$ $$\dots$$
$$W_0$$ $$\top$$ $$\top$$ $$\top$$ $$\top$$ $$\top$$ $$\top$$
$$\vdots$$ $$\vdots$$ $$\vdots$$ $$\vdots$$ $$\vdots$$ $$\vdots$$ $$\vdots$$
$$W_{i+1}$$ $$\top$$ $$\top$$ $$\top$$
Thus, any Procrastination Bot cooperates with any other Procrastination Bot.
Existence and uniqueness of the fixed point via quining
Given a formula $$\psi(m,n)$$, there exists formula $$\phi(n)$$ such that $$\vdash\phi(n)\leftrightarrow\psi(\ulcorner\phi\urcorner,n)$$. The quine construction (thanks Benja!) is $\phi(n)\equiv\psi(\mbox{let }k=\ulcorner\psi(\mbox{let }k=x\mbox{ in }\operatorname{subst}_{\ulcorner x\urcorner}(\operatorname{quote}(k),k))\urcorner\mbox{ in }\operatorname{subst}_{\ulcorner x\urcorner}(\operatorname{quote}(k),k),n)$
Let $$\psi(\ulcorner \phi \urcorner, n) = \square\ulcorner \phi(\overline{n+1}) \urcorner$$. Then we have $PA \vdash \forall n: (\phi(n) \leftrightarrow \square\ulcorner \phi(\overline{n+1}) \urcorner).$
Now consider that $$\square \ulcorner \forall n: \phi(n) \urcorner \rightarrow \square\ulcorner \phi(\overline{n+1}) \urcorner$$. Using the above, we have $PA \vdash \square \ulcorner \forall n: \phi(n) \urcorner \rightarrow \forall n: \phi(n).$
By Lob’s theorem, $$PA \vdash \forall n: \phi(n)$$, so a fixed point exists.
Assume there are two fixed points $$\phi$$ and $$\phi'$$. Then we have $PA \vdash \forall n: [\phi(n) \leftrightarrow \psi(\ulcorner \phi \urcorner,n)] \land [\phi'(n) \leftrightarrow \psi(\ulcorner \phi' \urcorner,n)],$ $PA \vdash \forall \ulcorner \phi \urcorner, \ulcorner \phi' \urcorner: \square \ulcorner \forall n: \phi(n) \leftrightarrow \phi'(n) \urcorner \rightarrow [\psi(\ulcorner \phi \urcorner, n) \leftrightarrow \psi(\ulcorner \phi' \urcorner, n)]$ Thus, $$PA \vdash \forall n: \phi(n) \leftrightarrow \phi'(n)$$, so the fixed points are the same.
### NEW DISCUSSION POSTS
Note: I currently think that
by Jessica Taylor on Predicting HCH using expert advice | 0 likes
Counterfactual mugging
by Jessica Taylor on Doubts about Updatelessness | 0 likes
What do you mean by "in full
by David Krueger on Doubts about Updatelessness | 0 likes
It seems relatively plausible
by Paul Christiano on Maximally efficient agents will probably have an a... | 1 like
I think that in that case,
by Alex Appel on Smoking Lesion Steelman | 1 like
by Sam Eisenstat on No Constant Distribution Can be a Logical Inductor | 1 like
A: While that is a really
by Alex Appel on Musings on Exploration | 0 likes
> The true reason to do
by Jessica Taylor on Musings on Exploration | 0 likes
by Vadim Kosoy on Musings on Exploration | 1 like
I'm not convinced exploration
by Abram Demski on Musings on Exploration | 0 likes
Update: This isn't really an
by Alex Appel on A Difficulty With Density-Zero Exploration | 0 likes
If you drop the
by Alex Appel on Distributed Cooperation | 1 like
Cool! I'm happy to see this
by Abram Demski on Distributed Cooperation | 0 likes
Caveat: The version of EDT
by 258 on In memoryless Cartesian environments, every UDT po... | 2 likes
[Delegative Reinforcement
by Vadim Kosoy on Stable Pointers to Value II: Environmental Goals | 1 like
|
2018-05-28 07:34:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7358028888702393, "perplexity": 1062.1903445101486}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794872114.89/warc/CC-MAIN-20180528072218-20180528092218-00244.warc.gz"}
|
https://gmatclub.com/forum/the-figure-above-shows-a-circle-inscribed-in-a-square-which-is-in-turn-280329.html
|
GMAT Question of the Day - Daily to your Mailbox; hard ones only
It is currently 20 Nov 2018, 20:48
GMAT Club Daily Prep
Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
Your Progress
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
Events & Promotions
Events & Promotions in November
PrevNext
SuMoTuWeThFrSa
28293031123
45678910
11121314151617
18192021222324
2526272829301
Open Detailed Calendar
• All GMAT Club Tests are Free and open on November 22nd in celebration of Thanksgiving Day!
November 22, 2018
November 22, 2018
10:00 PM PST
11:00 PM PST
Mark your calendars - All GMAT Club Tests are free and open November 22nd to celebrate Thanksgiving Day! Access will be available from 0:01 AM to 11:59 PM, Pacific Time (USA)
• Free lesson on number properties
November 23, 2018
November 23, 2018
10:00 PM PST
11:00 PM PST
Practice the one most important Quant section - Integer properties, and rapidly improve your skills.
The figure above shows a circle inscribed in a square which is in turn
new topic post reply Question banks Downloads My Bookmarks Reviews Important topics
Author Message
TAGS:
Hide Tags
Math Expert
Joined: 02 Sep 2009
Posts: 50711
The figure above shows a circle inscribed in a square which is in turn [#permalink]
Show Tags
31 Oct 2018, 00:37
00:00
Difficulty:
25% (medium)
Question Stats:
76% (01:32) correct 24% (00:22) wrong based on 45 sessions
HideShow timer Statistics
The figure above shows a circle inscribed in a square which is in turn inscribed within a larger circle. What is the ratio of the area of the larger circle to that of the smaller circle?
A. √2
B. π/2
C. π^2/(4√2)
D. 2
E. π/√2
Attachment:
phd01.png [ 13.19 KiB | Viewed 303 times ]
_________________
Manager
Joined: 09 Jun 2018
Posts: 152
Location: United States
GPA: 3.95
WE: Manufacturing and Production (Energy and Utilities)
Re: The figure above shows a circle inscribed in a square which is in turn [#permalink]
Show Tags
31 Oct 2018, 06:27
Small circle radius = r,
Half of diagonal of square = sqrt(r^2 + r^2) = $$\sqrt{2}$$r = Radius of larger circle
Ratio = pi*$$(sqrt(2)r)^2$$/pi*$$r^2$$ = 2
Option D
_________________
If you found this relevant and useful, please Smash that Kudos button!
Intern
Joined: 06 Oct 2017
Posts: 25
Re: The figure above shows a circle inscribed in a square which is in turn [#permalink]
Show Tags
20 Nov 2018, 13:49
nkin wrote:
Small circle radius = r,
Half of diagonal of square = sqrt(r^2 + r^2) = $$\sqrt{2}$$r = Radius of larger circle
Ratio = pi*$$(sqrt(2)r)^2$$/pi*$$r^2$$ = 2
Option D
I'm a little lost. I thought the diagonal of a square was d=s√2 so wouldn't d=s√2/2 be the radius of the larger circle ? Where did you get sqrt(r^2 + r^2)
Please help me understand where I'm not seeing what's obvious
Re: The figure above shows a circle inscribed in a square which is in turn &nbs [#permalink] 20 Nov 2018, 13:49
Display posts from previous: Sort by
The figure above shows a circle inscribed in a square which is in turn
new topic post reply Question banks Downloads My Bookmarks Reviews Important topics
Powered by phpBB © phpBB Group | Emoji artwork provided by EmojiOne Kindly note that the GMAT® test is a registered trademark of the Graduate Management Admission Council®, and this site has neither been reviewed nor endorsed by GMAC®.
|
2018-11-21 04:48:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5353074073791504, "perplexity": 6262.419923647199}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039747024.85/warc/CC-MAIN-20181121032129-20181121054129-00234.warc.gz"}
|
http://www.complexity-explorables.org/explorables/hopfed-turingles/
|
EXPLORABLES
With this explorable you can discover a variety of spatio-temporal patterns that can be generated with a very famous and simple autocatalytic reaction diffusion system known as the Gray-Scott model. In the model two substances $$U$$ and $$V$$ interact and diffuse in a two-dimensional container. Although only two types of simple reactions occur, the system generates a wealth of different stable and dynamic spatio-temporal patterns depending on system parameters.
Press play and observe different patters by selecting preset parameter values for the supply rate of $$U$$ (parameter $$F$$) and the decay rate of $$V$$ (parameter $$k$$). The panel depicts the concentration $$u(\mathbf{x},t)$$ of substance $$U$$ as a function of position $$\mathbf{x}$$ and time $$t$$.
Keep on reading to learn a bit more about what's going on.
## This is how it works
Two reactions take place in the system. First, when a single $$U$$-particle encounters two $$V$$ particles it is converted into $$V$$ itself:
$U+2V\rightarrow 3V$
increasing the amount of $$V$$ in the system (and decreasing the amount of $$U$$). Also, the amount of substance $$V$$ is decreased at rate $$k$$ by spontaneous decay into some inert substance $$P$$:
$V\xrightarrow{k} P$
In addition, substance $$U$$ is uniformly supplied at a constant rate $$F$$. $$U$$-, $$V$$-, and $$P$$-particles are removed at rate $$F$$, proportional to their concentration, keeping the total density of particles fixed. If we denote the spatial concentrations of $$U$$ and $$V$$ by $$u(\mathbf{x},t)$$ and $$v(\mathbf{x},t)$$, the dynamics is governed by the equations:
$\partial_t{u}=-uv^2+F(1-u)+ D_u \nabla^2 u$
$\partial_t{v}=uv^2-(F+k)v + D_v \nabla^2 v$
The last terms account for the spatial diffusion of $$U$$ and $$V$$ particles, the constants $$D _{u,v}$$ denote the diffusion constants of sustances $$U,V$$ and $$\nabla^2=\partial_x^2+\partial_y^2$$ is the Laplacian.
The solutions to these equations are shown in the display panel when you press play. Initially the system is set up with randomly placed reactangles of different values for $$u$$ and $$v$$. With the tangent and normal sliders you can navigate to different parameter values of $$F$$ and $$k$$ in the dynamically most interesting region (red sausage).
Sometimes the system will go into a uniform state, in this case you can press the reset button (triangle pointing to the left).
### Stationary states, Instabilities & Bifurcations
When looking at the possible stationary states, we see that when $$v=0,u=1$$ everywhere we have $$\dot{u}=\dot{v}=0$$ and nothing changes. This uniform state is stationary. This makes sense, because if substance $$V$$ is absent it cannot be made. This solution always exists and is always stable with respect to small perturbations.
We can also imagine a spatially uniform state in which the supply of $$U$$, the autocatalysis of $$V$$ and the decay of $$V$$ balance, so both $$U$$ and $$V$$ balance at some nonzero concentration.
In fact, if one does the math, one can show that to the left of the solid black line in the parameter space shown in the control panel, three stationary states exist including the trivial one. Above the dashed line, one of the two is stable and the other unstable. When crossing the dashed line from above the non-trivial stable state also loses stability by something called a Hopf bifurcation.
When we look beyond spatially homogeneous solutions, additional interesting things happen. Even in regions that would exhibit stable stationary states in a well-mixed system, the diffusion in the system can destabilize the uniform state by the Turing mechanism and spatial structure spontaneously emerges.
The complexity of the patterns of the Gray-Scott model emerges because in a narrow range in parameter space a Hopf bifurcation, a saddle node bifurcation and Turing instabilities are entangled.
## Try this
You can try to discover new patterns by starting at a preset pattern and gently move the sliders. Often, very different patterns can exist in close proximity in parameter space.
|
2019-09-16 08:04:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7691249847412109, "perplexity": 473.64780028305563}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514572516.46/warc/CC-MAIN-20190916080044-20190916102044-00390.warc.gz"}
|
https://math.stackexchange.com/questions/2389074/multivariable-derivative-limit-definition
|
Multivariable derivative: Limit definition
Consider a path through a domain in $\mathbb{R}^2$ given by $\mathbf{c}(t) = (x(t), y(t))$. We wish to find the rate of change of a function $f(x,y)$ along this path. Therefore, we wish to compute $\frac{d}{dt}f(\mathbf{c}(t))$. My question is about the limit definition. My book gives the limit definition of the derivative as
$$\frac{d}{dt}f(\mathbf{c}(t)) = \lim_{h\to\ 0}\frac{f(x(t+h),y(t+h)) - f(x(t),y(t))}{h}\tag{1}$$
Question
Why isn't the derivative written as: $$\frac{d}{d\mathbf{c}'(t)}f(\mathbf{c}(t)) = \lim_{h\to\ 0}\frac{f(x(t+h),y(t+h)) - f(x(t),y(t))}{\sqrt{(x(t+h)-x(t))^2 + (y(t+h) - y(t))^2}}\tag{2}$$ $$\frac{d}{d\mathbf{c}'(t)}f(\mathbf{c}(t)) = \lim_{\Delta x,\Delta y\to\ 0}\frac{f(x(t) + \Delta x,y(t) + \Delta y) - f(x(t),y(t))}{\sqrt{\Delta x^2 + \Delta y^2}}$$ $$\Delta x = x(t+h) - x(t) \\ \Delta y = y(t+h) - y(t)$$
where $d/d\mathbf{c}'(t)$ indicates the derivative in the direction of the tangent vector of the path. After reading the comments and responses under this question, my answer to my own question (with their help) is that these two limits speak to different derivatives. The first limit indicates the rate of change of $f$ as the parameter $t$ is changed. It indicates change in $f$ per unit $t$ ($t$ usually stands for time, but as we'll see below, it's just an arbitrary parameter. $t$ doesn't have to be 'time'). This derivative will depend on how you parameterize your path. A path can be parameterized in infinitely many ways ($\mathbf{c}_2(t)$ might move along your path twice as fast as $\mathbf{c}_1(t)$ for instance).
On the other hand, the second limit is simply the derivative of $f(x,y) = f(\mathbf{c}(t))$. It's just the derivative of the outside function with respect to the inside variables (instead of the derivative of the outside function with respect to the inside-inside variable - remember that if $f(x) = f(g(t))$, $f$ can either change directly through $x$, or indirectly through $t$. Because each case changes the function either by changing the $x$ or $t$ knob, we can ask for either $d/dx$ or $d/dt$). But back to equation $\textbf{(2)}$, since the inside variables $(x,y)$ form a path $\mathbf{c}(t)$, I don't write $\partial/\partial x$ or $\partial /\partial y$, but $d/d\mathbf{c}'(t)$ since $x,y$ are confined to change along a path. As you take the limit, you see that the derivative is in a direction tangent to your path, which is why I use $d/d\mathbf{c}'(t)$ since $\mathbf{c}'(t)$ is the tangent vector to the path. Again, the derivative represents the rate of change with respect to tangent lines to this path (you can see the direction that this derivative is taken in at a given $t$ by looking at the right hand side of equation $\textbf{(2)}$). This derivative indicates change in $f$ per unit change in tangent direction. This derivative will depend on your parameterization.
Comparisons to the 1-variable case: The first limit (1)
Let's discuss the first limit (the derivative of $f$ with respect to the 'inside-inside' variable. How does $f$ change indirectly through $t$). Is there a 1D analog? Yes. Consider the composite function $f(x) = f(x(t))$. What's the derivative of $f$ with respect to $t$? We can write the limit definition:
$$\frac{df(x(t))}{dt} = \lim_{h\to\ 0}\frac{f(x(t+h))-f(x(t))}{h}\tag{3}$$ This is indeed the 1D version of the first limit above $\textbf{(1)}$. To further drive the comparison, we know that $\frac{df}{dt} = f'(x(t))x'(t) = (df/dx)(dx/dt)$ = derivative of the outside times derivative of the inside. And in the multivariate case, that first limit can be shown to equal $\nabla f \cdot \mathbf{c}'(t)$ = derivative of the outside times derivative of the inside and do a sum for each variable. Indeed, the multivariate chain rule is in fact the generalization of the 1D chain rule. $f(x(t))$ is a composite function. But so to is $f(x(t),y(t)) = f(\mathbf{c}(t))$, just in a multivariate way. $\mathbf{c}(t)$ parameterizes how you move along the $x$ and $y$ axes. In the single variable case $f(x(t))$, we are parameterizing how we move along the only axis that $f$ has access to, which is the $x$ axis ($x = x(t)$). We usually see composite functions as $f(g(x)) = f(u)$, in which case we are parameterizing how $f$ moves along the $u$ axis with parameterization $g(x)$ (I'm just using different variables to show that a parameter is a parameter - It doesn't have to be time). Again this derivative of the 'inside-inside' variable will depend on your parameterization, which we can clearly see by looking at the 1D case (just take simple examples and see how $x'(t)$ changes). For instance, in 1D the only path you have to parameterize is the real line. That's the only path (let the path be the whole real line). Therefore different parameterizations will have different $x'(t)$. That is, different parameterizations will have different 'speeds' (again note that if $x$ is parameterized by $x(t)$, $x' = x'(t)$ is only a 'speed' if $x$ has units of meters and $t$ units of time. By 'speed' of the parameterization, I just mean derivative. To make this clearer, let $f(u) = f(g(k))$. The speed of the parameterization is $du/dk = g'(k)$). The change in $f$ per unit parameter will depend on how your path is parameterized. If $f$ is in temperature units Kelvin, $df(x(t))/dt$ could be 2 Kelvin per second for one parameterization and 3 Kelvin per second for another.
Comparisons to the 1-variable case: The second limit (2)
Let's look at the 1-variable case $f(u) = f(g(t))$. What is $df/du$? This is just the derivative of the outside function with respect to the inside function (it's how $f$ changes directly through it's domain variables as opposed to indirectly through the parameter). It's limit definition is given by
$$\frac{df(u)}{du} = \lim_{k\to\ 0}\frac{f(g(t) + k) - f(g(t))}{k}$$ which setting $k = g(t+h) - g(t)$ gives
$$\frac{df(u)}{du} = \lim_{h\to\ 0}\frac{f(g(t+h)) - f(g(t))}{g(t+h) - g(t)}\tag{4}$$ Either way hopefully you can get to this line without going through the first. All you're doing is taking the function at two different values and dividing by the difference. This is the 1D analog of the second limit $\textbf{(2)}$. The main difference is that in 1D, I can only move along my parameterization, which here is the $u$ axis. Therefore I write $d/du$. In the multivariate case, again I can only move along my parameterization. However, as we take a small step size $\Delta x$ and $\Delta y$ allowed by my parameterization that we let go to zero (by looking at $\textbf{(2)}$ hopefully you can see that $h \to 0$ is equivalent to $\Delta x \to 0$ or $\Delta y \to 0$), we are finding the rate of change of the function $f$ in the tangent direction to the path (think of two points on your path $\mathbf{c}(t)$. Fix one and allow the other to approach it. The two points form a tangent line in the limit). Therefore, I use $d/d\mathbf{c}'(t)$ to denote direction of the derivative. It's always a direction tangent to the path and that tangent direction changes over the path. In 1D, the direction tangent to the path was just the path itself (the axis), for the whole path. This derivative $\textbf{(2)}$ and $\textbf{(4)}$) will depend on your parameterization, which we can see by taking simple examples and looking at the 1D case $\textbf{(4)}$. Equations $\textbf{(2)}$ and $\textbf{(4)}$ give the rate of change of $f$ along the tangent direction to the path (which depends on your parameterization). Equations $\textbf{(1)}$ and $\textbf{(3)}$ give the rate of change of $f$ with respect to the parameter (which also depends on your parameterization).
Comments/Complaints on the Directional Derivative
A pure directional derivative is given by equation $\textbf{(2)}$ where $\mathbf{c}(t)$ is a line. That's it, end of story. For a vector $\vec{v} = (v_x, v_y)$, a parameterization along this direction is $x(t) = x + v_xt$ and $y(t) = y + v_yt$. Therefore, the directional derivative, given by equation $\textbf{(2)}$ is:
$$\frac{df(x(t),y(t))}{d\vec{v}} = \lim_{h\to 0} = \frac{f(x(t) + v_xh, y(t) + v_yh) - f(x(t), y(t))}{h}$$ $\textbf{IF}$ $\vec{v}$ is a unit vector so that $\sqrt{v_x^2 + v_y^2}$ = 1. However, textbooks will sometimes define this as the directional derivative even if the vector $\vec{v}$ is not a unit vector. I'm not a fan of this because that's not what equation $\textbf{(2)}$ says to do (and I find that $\textbf{(2)}$ makes sense). Essentially what they've done is $\textbf{redefine}$ what it means to be a unit vector. This is fine, but now we take $\vec{v}$ whatever it is, to be the unit vector standard. Therefore, their derivative which is still a rate of change of $f$ per unit distance, has a different meaning of 'per unit distance' then my definition of per unit distance based on my idea of a unit vector. The reason why they do this is because they understand what they are doing (redefining what a unit vector means - which is fine - but for 1st time learners, I think it can be confusing if you don't show that the directional derivative is essentially equation $\textbf{(2)}$.)
• Well, $d/dt$ is a time derivative, so you'd want $\Delta t = h$ in the denominator, don't you? – Hans Lundmark Aug 10 '17 at 13:36
• @HansLundmark Oh so maybe I should have wrote $\frac{d}{d\hat{u}}f(c(t))|_{t = t_0}$ where $\hat{u}$ is the unit tangent direction at $t = t_0$? But does what I'm saying under "My thoughts" make sense? – DWade64 Aug 10 '17 at 13:42
• Are you familiar with the concept of directional derivative? That's what you get if you take a curve with speed $1$. – Hans Lundmark Aug 10 '17 at 13:45
• @HansLundmark Yes the directional derivative is the case where $\mathbf{c}(t)$ is a straight line. A straight line can be parameterized by many different slopes. But we use the word directional derivative mainly for unit vectors. Therefore the directional derivative yields "rate of change of f per unit distance along line." But you can have other parameterizations. I guess why I'm confused is because in single variable calc, you never say x = x(t). If you were to write this, it was always x = t. Therefore derivatives are always 'with respect to unit distance.' – DWade64 Aug 10 '17 at 13:53
• Even in one variable, you could have a “curve on the line”, it's just that such curves are not as interesting as curves in the plane or in space... Anyway, the point of defining the time derivative along a curve is to take the curve's speed into consideration. If you want change per unit distance instead of change per unit time, you already have the directional derivative. – Hans Lundmark Aug 10 '17 at 14:05
The first definition could be well-defined even if the path $\bf c$ is not differentiable at some point. For instance put $f(x,y)=x+y$, $\mathbf c(t)=\begin{cases} (0,-t) & \text{if$t\le0$}\\ (t,0) & \text{if$t\ge0$} \end{cases}$. This path is not differentiable at $0$, but the first definition still applies to obtain $$\frac{df(\mathbf c(t))}{dt}\Bigr|_{t = 0}=1.$$
There is this guy flying over the $(x,y)$-plane, and he is continuously measuring the outside temperature $f(x,y)$. A standard apparatus with a rotating drum, as it is widely used in labs and zoos, would plot the measured temperature against time, i.e., would produce an ink plot of the function $$t\mapsto T(t)=f\bigl({\bf c}(t)\bigr)\ .$$ The derivative of this function is given in your first formula, and is the slope of the ink plot at the point $\bigl(t,T(t)\bigr)$. The resulting slope value not only depends on the point ${\bf c}(t)$ and the temperature conditions there, but also on the momentaneous tachometer speed of the aircraft.
On the other hand, the temperature curve $s\mapsto T(s)$ you are envisaging in your second approach does not depend on the tachometer speed of the aircraft, but only on its compass direction. In other words: It is the curve felt in an aircraft flying with constant speed $1$ along the same route.
• This is perfect. I'm glad I was thinking the right way. – Faraad Armwood Aug 10 '17 at 18:58
• Thanks. If $T(t)$ is a plot of temperature over time, this means that the plane is stationary at one point? So how does $T(t) = f(\mathbf{c}(t))$? – DWade64 Aug 10 '17 at 20:31
• @DWade64: The pilot has an apparatus on board with a rotating drum. I'm sure you have seen such a device before. – Christian Blatter Aug 11 '17 at 7:33
I think the best way to clear this up is to view the composition as a new function i.e $g(t)=(f \circ c)(t)$ rather than $f(c(t))$. Then it becomes clear that,
\begin{align*} \frac{d}{dt} \Bigr|_{t = t_0} f(c(t)) = \frac{d}{dt}\Bigr|_{t=t_0} g(t) &= \lim_{h \to 0} \frac{g(t_0+h) - g(t_0)}{h} \\ \\ &= \lim_{h \to 0} \frac{f(c(t_0+h)) - f(c(t_0))}{h} \\ \\ & = \lim_{h\to 0} \frac{f(x(t_0+h), y(t_0+h)) - f(x(t_0), y(t_0))}{h}\end{align*}
The first way will give the definition in terms of viewing $Df(c(t_0))$ as a linear map. In the case where $f: \mathbb{R}^m \to \mathbb{R}^1$ then $Df(c(t_0)) = \nabla f(c(t_0))$ where,
$$Df(c(t_0)) \cdot \begin{pmatrix} x(t_0+h) - x(t_0):= \Delta x) \\ y(t_0+h) - y(t_0):= \Delta y \end{pmatrix} + \epsilon = f(c(t_0+h)) - f(c(t_0))$$
The differential satisfies the limit you want i.e,
$$\lim_{(\Delta x, \Delta y) \to (0,0)} \frac{\left\|f(c(t_0+h))- f(c(t_0)) - Df(c(t_0)) \begin{pmatrix} \Delta x \\ \Delta y\end{pmatrix}\right\|}{\sqrt{\Delta x^2 + \Delta y^2}} = 0$$
and this is how you see $\Delta f \approx df$.
• Aarmwood Won't your $\epsilon$ justify using $=$ instead of $\approx$ in the first case? – Steven Thomas Hatton Aug 11 '17 at 16:35
• I think you also meant $\Delta f \approx df$ rather than $\nabla f \approx df$. – Steven Thomas Hatton Aug 11 '17 at 16:39
• @StevenHatton: Yes on the $\epsilon$ and thank you for pointing out the typo. – Faraad Armwood Aug 11 '17 at 16:49
|
2019-10-23 06:52:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9463458061218262, "perplexity": 218.9108735418928}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987829458.93/warc/CC-MAIN-20191023043257-20191023070757-00534.warc.gz"}
|
https://www.jobilize.com/trigonometry/course/2-3-models-and-applications-equations-and-inequalities-by-openstax
|
# 2.3 Models and applications
Page 1 / 9
In this section you will:
• Set up a linear equation to solve a real-world application.
• Use a formula to solve a real-world application.
Josh is hoping to get an A in his college algebra class. He has scores of 75, 82, 95, 91, and 94 on his first five tests. Only the final exam remains, and the maximum of points that can be earned is 100. Is it possible for Josh to end the course with an A? A simple linear equation will give Josh his answer.
Many real-world applications can be modeled by linear equations. For example, a cell phone package may include a monthly service fee plus a charge per minute of talk-time; it costs a widget manufacturer a certain amount to produce x widgets per month plus monthly operating charges; a car rental company charges a daily fee plus an amount per mile driven. These are examples of applications we come across every day that are modeled by linear equations. In this section, we will set up and use linear equations to solve such problems.
## Setting up a linear equation to solve a real-world application
To set up or model a linear equation to fit a real-world application, we must first determine the known quantities and define the unknown quantity as a variable. Then, we begin to interpret the words as mathematical expressions using mathematical symbols. Let us use the car rental example above. In this case, a known cost, such as $0.10/mi, is multiplied by an unknown quantity, the number of miles driven. Therefore, we can write $\text{\hspace{0.17em}}0.10x.\text{\hspace{0.17em}}$ This expression represents a variable cost because it changes according to the number of miles driven. If a quantity is independent of a variable, we usually just add or subtract it, according to the problem. As these amounts do not change, we call them fixed costs. Consider a car rental agency that charges$0.10/mi plus a daily fee of \$50. We can use these quantities to model an equation that can be used to find the daily car rental cost $\text{\hspace{0.17em}}C.$
$C=0.10x+50$
When dealing with real-world applications, there are certain expressions that we can translate directly into math. [link] lists some common verbal expressions and their equivalent mathematical expressions.
Verbal Translation to Math Operations
One number exceeds another by a $x,\text{}\text{\hspace{0.17em}}x+a$
Twice a number $2x$
One number is a more than another number $x,\text{}\text{\hspace{0.17em}}x+a$
One number is a less than twice another number $x,\text{\hspace{0.17em}}2x-a$
The product of a number and a , decreased by b $ax-b$
The quotient of a number and the number plus a is three times the number $\frac{x}{x+a}=3x$
The product of three times a number and the number decreased by b is c $3x\left(x-b\right)=c$
Given a real-world problem, model a linear equation to fit it.
1. Identify known quantities.
2. Assign a variable to represent the unknown quantity.
3. If there is more than one unknown quantity, find a way to write the second unknown in terms of the first.
4. Write an equation interpreting the words as mathematical operations.
5. Solve the equation. Be sure the solution can be explained in words, including the units of measure.
## Modeling a linear equation to solve an unknown number problem
Find a linear equation to solve for the following unknown quantities: One number exceeds another number by $\text{\hspace{0.17em}}17\text{\hspace{0.17em}}$ and their sum is $\text{\hspace{0.17em}}31.\text{\hspace{0.17em}}$ Find the two numbers.
Let $\text{\hspace{0.17em}}x\text{\hspace{0.17em}}$ equal the first number. Then, as the second number exceeds the first by 17, we can write the second number as $\text{\hspace{0.17em}}x+17.\text{\hspace{0.17em}}$ The sum of the two numbers is 31. We usually interpret the word is as an equal sign.
The two numbers are $\text{\hspace{0.17em}}7\text{\hspace{0.17em}}$ and $\text{\hspace{0.17em}}24.$
A laser rangefinder is locked on a comet approaching Earth. The distance g(x), in kilometers, of the comet after x days, for x in the interval 0 to 30 days, is given by g(x)=250,000csc(π30x). Graph g(x) on the interval [0, 35]. Evaluate g(5) and interpret the information. What is the minimum distance between the comet and Earth? When does this occur? To which constant in the equation does this correspond? Find and discuss the meaning of any vertical asymptotes.
The sequence is {1,-1,1-1.....} has
how can we solve this problem
Sin(A+B) = sinBcosA+cosBsinA
Prove it
Eseka
Eseka
hi
Joel
June needs 45 gallons of punch. 2 different coolers. Bigger cooler is 5 times as large as smaller cooler. How many gallons in each cooler?
7.5 and 37.5
Nando
find the sum of 28th term of the AP 3+10+17+---------
I think you should say "28 terms" instead of "28th term"
Vedant
the 28th term is 175
Nando
192
Kenneth
if sequence sn is a such that sn>0 for all n and lim sn=0than prove that lim (s1 s2............ sn) ke hole power n =n
write down the polynomial function with root 1/3,2,-3 with solution
if A and B are subspaces of V prove that (A+B)/B=A/(A-B)
write down the value of each of the following in surd form a)cos(-65°) b)sin(-180°)c)tan(225°)d)tan(135°)
Prove that (sinA/1-cosA - 1-cosA/sinA) (cosA/1-sinA - 1-sinA/cosA) = 4
what is the answer to dividing negative index
In a triangle ABC prove that. (b+c)cosA+(c+a)cosB+(a+b)cisC=a+b+c.
give me the waec 2019 questions
|
2019-06-27 00:25:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 16, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6534655690193176, "perplexity": 766.5631216183618}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000609.90/warc/CC-MAIN-20190626234958-20190627020958-00095.warc.gz"}
|
https://math.stackexchange.com/questions/467586/why-is-the-partial-derivative-of-this-fuction-locally-bounded
|
# Why is the partial derivative of this fuction locally bounded?
We have a function for $x_i, t_i >0$
$$|f(x_1,t_1)-f(x_0,t_0)| \leq C (|t_1-t_0|^{1/2} + |x_1-x_0|)$$
Why does this mean $f_t$ is locally bounded?
$f$ is non-increasing and convex in $x$. $f$ is non decreasing in $t$. Also $f$ is bounded below by 0 and above by a constant.
We have $$f_t(x,s) = \lim\limits_{u\rightarrow s} \dfrac{|f(x,u) - f(x,s)|}{u-s}$$
The bound only tells us
$$\lim\limits_{u\rightarrow s} \dfrac{|f(x,u) - f(x,s)|}{|u-s|}\leq C|u-s|^{-1/2}$$
Which can blow up.
• Is that $x_2$ up there meant to be an $x_0$? – Patrick Da Silva Aug 14 '13 at 16:55
• @PatrickDaSilva yes. thanks – Lost1 Aug 14 '13 at 17:00
• Why are you sure $f_t$ is locally bounded? I am not convinced either. – Patrick Da Silva Aug 14 '13 at 17:03
• @PatrickDaSilva because a paper says so. I will try to see if I missed any conditions. – Lost1 Aug 14 '13 at 17:03
• Papers can seem wrong if you read them wrong, or sometimes papers can be wrong. You should figure out if something weird is happening in there, I'm trying to think of a counter example ; I don't believe your conditions are sufficient. – Patrick Da Silva Aug 14 '13 at 17:05
|
2019-10-21 09:54:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7429853677749634, "perplexity": 576.6733017162708}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987769323.92/warc/CC-MAIN-20191021093533-20191021121033-00322.warc.gz"}
|
http://log4think.com/category/translation/
|
# 在Android中使用OSGi框架(Apache Felix)
##Dalvik VM
Android允许开发者使用Java开发应用,但出于某些原因,代码实际是运行在名为Dalvik的一个针对移动设备平台的虚拟机上,而不是标准的Java虚拟机。Dalvik并不使用标准的Java字节码格式,而是使用Android SDK中的一个工具dx将由Java编译出来的类文件转换为另外一种类文件格式(.dex格式)。这个转换是在编译期完成的。
##准备Bundles
*第一步:* 每一个用到的Jar文件,无论是Felix库还是你自己写的Bundle,都需要包含对应的DEX。也就说,需要为jar文件创建对应的dex文件:
dx --dex --output=classes.dex JAR_file.jar
*第二步:* 将处理过的jar文件传到模拟器(或真机)中:
*第三步:* 以演示代码为例,准备Felix的jar文件和Bundle的jar文件:
osgi-android: /
\- bin
\- bundle
\- conf
\- felix.sh
export PATH=
# 统一业务模型(UBM) in ERP5
## 统一业务模型中的五个概念是什么
UBM包含5个概念,分别是节点(Node)、资源(Resource)、迁移(Movement)、物品(Item)和路径(Path)。
## ERP5中如何应用统一业务模型
UBM统一业务模型完全集成在ERP5的实现中。ERP5中的所有文档都基于该模型设计,ERP5中的所有工具也都采用该模型统一实现不同的业务活动,例如交易、生产、客户关系管理、财务和项目管理。
# 星际争霸2的大学专业?
[This class] does not teach about Starcraft, but rather aims to utilize the game and the complex situations that arise within it to present and develop the important skills professionals will undoubtedly need in the 21st Century workplace.
(这个课程)不会教你关于星际争霸的知识,而是利用游戏中出现的各种复杂情况来展现和研究在21世纪的职场中所需要的各种重要的职业技巧。
This course includes required weekly game play, viewing and analysis of recorded matches, written assignments which emphasize analysis and synthesis of real/game-world concepts, and collaboration with other students.
via crunchgear
# 10 Ways We Hurt Our Romantic Relationships
10 Ways We Hurt Our Romantic Relationshipsx
It's not easy to have a great relationship with your boy/girlfriend, partner, or spouse. But it's not impossible, either — it takes some work, of course, but it's work, work that's a joy when everything comes together.
A lot of times, though, the work isn't enough. We get in our own way with ideas and attitudes about relationships that are not only wrong, but often work to undermine our relationships no matter how hard we work at it.
I've watched a lot of breakups (some of them my own). I've seen dramatic flare-ups and drawn-out slow fades, and I've tried to pay attention to what seems to be going on. Here are a few of the things I've seen that cause people to destroy their own relationships.
1. You're playing to win
事事争胜
One of the deadliest killers of relationships is the competitive urge. I don't mean competition in the sense that you can't stand to lose at tennis, I mean the attitude that the relationship itself is a kind of game that you're tying to win. People in competitive relationships are always looking for an advantage, the upper hand, some edge they can hold over their partner's head. If you feel that there are things you can't tell your partner because she or he will use it against you, you're in a competitive relationship — but not for long.
2. You don't trust
缺乏信任
There are two aspects of trust that are important in relationships. One is trusting your partner enough to know that s/he won't cheat on you or otherwise hurt you — and to know that he or she trusts you that way, too. The other is trusting them enough to know they won't leave you or stop loving you no matter what you do or say. The second that level of trust is gone, whether because one of you takes advantage of that trust and does something horrible or because one of you thinks the other has, the relationship is over — even if it takes 10 more years for you to break up.
3. You don't talk
欠缺沟通
Too many people hold their tongues about things that bother or upset them in their relationship, either because they don't want to hurt their partner, or because they're trying to win. (See #1 above; example: “If you don't know why I'm mad, I'm certainly not going to tell you!”) While this might make things easier in the short term, in the long run it gradually erodes the foundation of the relationship away. Little issues grow into bigger and bigger problems — problems that don't get fixed because your partner is blissfully unaware, or worse, is totally aware of them but thinks they don't really bother you. Ultimately, keeping quiet reflects a lack of trust — and, as I said that's the death of a relationship.
4. You don't listen
不会倾听
Listening — really listening — is hard. It's normal to want to defend ourselves when we hear something that seems like criticism, so instead of really hearing someone out, we interrupt to explain or excuse ourselves, or we turn inward to prepare our defense. But your partner deserves your active listening. S/he even deserves you to hear the between-the-lines content of daily chit-chat, to suss out his/her dreams and desires when even s/he doesn't even know exactly what they are. If you can't listen that way, at least to the person you love, there's a problem.
5. You spend like a single person
自己过自己的
This was a hard lesson for me to learn — until it broke up a 7-year relationship. When you're single, you can buy whatever you want, whenever you want, with little regard for the future. It's not necessarily wise, but you're the only one who has to pay the consequences. When you are with someone in a long-term relationship, that is no longer a possibility. Your partner — and your children, if there are or will be any — will have to bear the brunt of your spending, so you'd better get in the habit of taking care of household necessities first and then, if there's anything left over, of discussing with your partner the best way to use it.
This is an increasing problem these days, because more and more people are opting to keep their finances separate, even when they're married. There's nothing wrong with that kind of arrangement in and of itself, but it demands more communication and involvement between the partners, not less. If you're spending money as if it was your money and nobody else has a right to tell you what to do with it, your relationship is doomed.
6. You're afraid of breaking up
害怕分手
Nobody in a truly happy partnership is afraid of breaking up. If you are, that's a big warning sign that something's wrong. But often, what's wrong is the fear itself. Not only does it betray a lack of trust, but it shows a lack of self-confidence and self-esteem — you're afraid that there's no good reason for someone to want to be with you, and that sooner or later your partner will “wise up” and take off. So you pour more energy into keeping up the appearance of a happy relationships than you do into building yourself up as a person. Quite frankly, this isn't going to be very satisfying for you, and it also isn't going to be very satisfying for your partner.
7. You're dependent
依赖性强
There's a thin line between companionship and support and dependency. If you depend on your partner — that is, if you absolutely cannot live without her or him — you've crossed that line. The pressure is now on your partner to fill whatever's missing in you — a pressure s/he will learn to resent. If you expect your partner to bring everything while you bring nothing to your relationship — and I'm talking finances as well as emotional support, here — you're in trouble. (Note: I'm not saying that you need to contribute equally to household finances — what I'm saying is that if you're not contributing to the household budget, and you're not contributing anywhere else, things are out of whack and that's never good.)
8. You expect Happiness
期望得到快乐
A sign of a bad relationship is that one or both partners expect either to make the other happy or for their partner to make them happy. This is not only an unrealistic expectation to lay on yourself or on them — nobody can “make” you happy, except you — but it's an unrealistic expectation to lay on your relationship. Relationships aren't only about being happy, and there's lots of times when you won't and even shouldn't be. Being able to rely on someone even when you're upset, miserable, depressed, or grieving is a lot more important than being happy all the time. If you expect your partner to make you happy — or worse, you're frustrated because you aren't able to make your partner happy — your relationship isn't going to fare well when it hits a rough spot.
9. You never fight
从不吵架
A good argument is essential, every now and then. In part, arguing helps bring out the little stuff before it becomes major, but also, fighting expresses anger which is a perfectly normal part of a human's emotional make-up. Your relationship has to be strong enough to hold all of who you are, not just the sunny stuff.
One reason couples don't fight is that they fear conflict — which reflects a lack of trust and a foundation of fear. That's bad. Another reason couples avoid arguments is that they've learned that anger is unreasonable and unproductive. They've learned that arguing represents a breakdown rather than a natural part of a relationship's development. While an argument isn't pleasant, it can help both partners to articulate issues they may not have even known they had — and help keep them from simmering until you cross a line you can't come back from.
10. You expect it to be easy/you expect it to be hard
把爱情想的很轻松或者很难
There are two deeply problematic attitudes about relationships I hear often. One is that a relationship should be easy, that if you really love each other and are meant to be together, it will work itself out. The other is that anything worth having is going to be hard — and that therefore if it's hard, it must be worth having.
The outcome of both views is that you don't work at your relationship. You don't work because it's supposed to be easy and therefore not need any work, or you don't work because it's supposed to be hard and it wouldn't be hard if you worked at it. In both cases, you quickly get burnt out — either because the problems you're ignoring really don't go away just because you think they should. or because the problems you're cultivating are a constant drag on your energy. A relationship that's too much work might be suffering from one of the attitudes above, but a relationship that doesn't seem to need any work isn't any better.
There isn't any one answer to any of the problems above. There are choices though: you can either seek out an answer, something that addresses why you are hurting your relationship, or you can resign yourself to the failure of your relationship (and maybe the next one, and the next one, and…). Failure doesn't always mean you break up — many people aren't that lucky. But people can live quite unhappily in failed relationships for years and even decades because they're afraid they won't find anything better, or worse, they're afraid they deserve it. Don't you be one of them — if you suffer from any of these problems, figure out how to fix it, whether that means therapy, a solo mountain retreat, or just talking to your partner and committing yourselves to change.
# 避免Android开发中的ANR
## ANR是什么
ANRs (“Application Not Responding”),意思是”应用没有响应“。
– 主线程 (“事件处理线程” / “UI线程”) 在5秒内没有响应输入事件
1、在主线程内进行网络操作
2、在主线程内进行一些缓慢的磁盘操作(例如执行没有优化过的SQL查询)
## 一些数据(Nexus One为例)
• ~0.04 ms – 通过管道进程从A->B再从B->A写一个字节;或者(从dalvik)读一个简单的/proc文件
• ~0.12 ms – 由A->B 再由B->A 进行一次Binder的RPC调用
• ~5-25 ms – 从未缓冲的flash
• ~5-200+(!) ms – 向为缓冲的flash中写点东西(下面是具体数据)
• 16 ms – 60fps的视频中的一帧
• 41 ms – 24fps的视频中的一帧
• 100-200 ms – human perception of slow action
• 108/350/500/800 ms – 3G网络上ping(可变)
• ~1-6+ seconds – 通过HTTP在3G网络上获取6k的数据
private class DownloadFilesTask extends AsyncTask {
protected Long doInBackground(URL... urls) { // on some background thread
int count = urls.length;
long totalSize = 0;
for (int i = 0; i < count; i++) {
publishProgress((int) ((i / (float) count) * 100));
}
}
protected void onProgressUpdate(Integer... progress) { // on UI thread!
setProgressPercent(progress[0]);
}
protected void onPostExecute(Long result) { // on UI thread!
}
}
private boolean handleWebSearchRequest(final ContentResolver cr) {
...
protected Void doInBackground(Void... unused) {
Browser.updateVisitedHistory(cr, newUrl, false);
return null;
}
}.execute()
...
return true;
}
1、必须从主线程调用,或者线程中有Handler或Looper。
• 用户退出了activity
• 系统内存不足
• 系统暂存了activity的状态留待后用
• 系统干掉了你的线程
## android.app.IntentService
Eclair(2.0, 2.1)文档中说:
“An abstract Service that serializes the handling of the Intents passed upon service start and handles them on a handler thread. To use this class extend it and implement onHandleIntent(Intent). The Service will automatically be stopped when the last enqueued Intent is handled.”
Froyo (2.2) 的文档, 又澄清了一下....
android.app.IntentService
“IntentService is a base class for Services that handle asynchronous requests (expressed as Intents) on demand. Clients send requests through startService(Intent) calls; the service is started as needed, handles each Intent in turn using a worker thread, and stops itself when it runs out of work.
This 'work queue processor' pattern is commonly used to offload tasks from an application's main thread. The IntentService class exists to simplify this pattern and take care of the mechanics. To use it, extend IntentService and implement onHandleIntent(Intent). IntentService will receive the Intents, launch a worker thread, and stop the service as appropriate.
All requests are handled on a single worker thread -- they may take as long as necessary (and will not block the application's main loop), but only one request will be processed at a time.”
## IntentService 的好处
• Acitivity的进程,当处理Intent的时候,会产生一个对应的Service
• Android的进程处理器现在会尽可能的不kill掉你
• 非常容易使用
public class DismissAllAlarmsService extends IntentService {
@Override public void onHandleIntent(Intent unusedIntent) {
ContentResolver resolver = getContentResolver();
...
resolver.update(uri, values, selection, null);
}
}
Intent intent = new Intent(context, DismissAllAlarmsService.class);
context.startService(intent);
## 其它技巧
2、显示一些动画,表示在处理中
3、使用进度条对话框
5、当不确定要耗时多久的时候,组合使用上述所有方法
## 总结
• 离开主线程!
• 磁盘和网络操作不是马上就能完的
• 了解sqlite在干嘛
• 进度展示很好
PS,在视频讲座中,作者还提到,Chrome团队为了避免Jank(响应超时而死掉),几乎所有的功能和任务都会在子线程里面去做。这一点也值得在Android中借鉴。
|
2019-04-26 01:59:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1741003543138504, "perplexity": 2220.6201813425537}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578747424.89/warc/CC-MAIN-20190426013652-20190426034829-00068.warc.gz"}
|
https://www.nag.com/numeric/nl/nagdoc_latest/flhtml/h/h03adf.html
|
▸▿ Contents
Settings help
FL Name Style:
FL Specification Language:
1Purpose
h03adf finds the shortest path through a directed or undirected acyclic network using Dijkstra's algorithm.
2Specification
Fortran Interface
Subroutine h03adf ( n, ns, ne, nnz, d, irow, icol, path, work,
Integer, Intent (In) :: n, ns, ne, nnz, icol(nnz) Integer, Intent (Inout) :: irow(nnz), ifail Integer, Intent (Out) :: path(n), iwork(3*n+1) Real (Kind=nag_wp), Intent (In) :: d(nnz) Real (Kind=nag_wp), Intent (Out) :: splen, work(2*n) Logical, Intent (In) :: direct
#include <nag.h>
void h03adf_ (const Integer *n, const Integer *ns, const Integer *ne, const logical *direct, const Integer *nnz, const double d[], Integer irow[], const Integer icol[], double *splen, Integer path[], Integer iwork[], double work[], Integer *ifail)
The routine may be called by the names h03adf or nagf_mip_shortestpath.
3Description
h03adf attempts to find the shortest path through a directed or undirected acyclic network, which consists of a set of points called vertices and a set of curves called arcs that connect certain pairs of distinct vertices. An acyclic network is one in which there are no paths connecting a vertex to itself. An arc whose origin vertex is $i$ and whose destination vertex is $j$ can be written as $i\to j$. In an undirected network the arcs $i\to j$ and $j\to i$ are equivalent (i.e., $i↔j$), whereas in a directed network they are different. Note that the shortest path may not be unique and in some cases may not even exist (e.g., if the network is disconnected).
The network is assumed to consist of $n$ vertices which are labelled by the integers $1,2,\dots ,n$. The lengths of the arcs between the vertices are defined by the $n×n$ distance matrix ${\mathbf{d}}$, in which the element ${d}_{ij}$ gives the length of the arc $i\to j$; ${d}_{ij}=0$ if there is no arc connecting vertices $i$ and $j$ (as is the case for an acyclic network when $i=j$). Thus the matrix $D$ is usually sparse. For example, if $n=4$ and the network is directed, then
$d=( 0 d12 d13 d14 d21 0 d23 d24 d31 d32 0 d34 d41 d42 d43 0 ) .$
If the network is undirected, ${\mathbf{d}}$ is symmetric since ${d}_{ij}={d}_{ji}$ (i.e., the length of the arc $i\to j\equiv \text{}$ the length of the arc $j\to i$).
The method used by h03adf is described in detail in Section 9.
4References
Dijkstra E W (1959) A note on two problems in connection with graphs Numer. Math. 1 269–271
5Arguments
1: $\mathbf{n}$Integer Input
On entry: $n$, the number of vertices.
Constraint: ${\mathbf{n}}\ge 2$.
2: $\mathbf{ns}$Integer Input
3: $\mathbf{ne}$Integer Input
On entry: ${n}_{s}$ and ${n}_{e}$, the labels of the first and last vertices, respectively, between which the shortest path is sought.
Constraints:
• $1\le {\mathbf{ns}}\le {\mathbf{n}}$;
• $1\le {\mathbf{ne}}\le {\mathbf{n}}$;
• ${\mathbf{ns}}\ne {\mathbf{ne}}$.
4: $\mathbf{direct}$Logical Input
On entry: indicates whether the network is directed or undirected.
${\mathbf{direct}}=\mathrm{.TRUE.}$
The network is directed.
${\mathbf{direct}}=\mathrm{.FALSE.}$
The network is undirected.
5: $\mathbf{nnz}$Integer Input
On entry: the number of nonzero elements in the distance matrix $D$.
Constraints:
• if ${\mathbf{direct}}=\mathrm{.TRUE.}$, $1\le {\mathbf{nnz}}\le {\mathbf{n}}×\left({\mathbf{n}}-1\right)$;
• if ${\mathbf{direct}}=\mathrm{.FALSE.}$, $1\le {\mathbf{nnz}}\le {\mathbf{n}}×\left({\mathbf{n}}-1\right)/2$.
6: $\mathbf{d}\left({\mathbf{nnz}}\right)$Real (Kind=nag_wp) array Input
On entry: the nonzero elements of the distance matrix $D$, ordered by increasing row index and increasing column index within each row. More precisely, ${\mathbf{d}}\left(k\right)$ must contain the value of the nonzero element with indices (${\mathbf{irow}}\left(k\right),{\mathbf{icol}}\left(k\right)$); this is the length of the arc from the vertex with label ${\mathbf{irow}}\left(k\right)$ to the vertex with label ${\mathbf{icol}}\left(k\right)$. Elements with the same row and column indices are not allowed. If ${\mathbf{direct}}=\mathrm{.FALSE.}$, then only those nonzero elements in the strict upper triangle of ${\mathbf{d}}$ need be supplied since ${d}_{ij}={d}_{ji}$. (f11zaf may be used to sort the elements of an arbitrarily ordered matrix into the required form.)
Constraint: ${\mathbf{d}}\left(\mathit{k}\right)>0.0$, for $\mathit{k}=1,2,\dots ,{\mathbf{nnz}}$.
7: $\mathbf{irow}\left({\mathbf{nnz}}\right)$Integer array Input
8: $\mathbf{icol}\left({\mathbf{nnz}}\right)$Integer array Input
On entry: ${\mathbf{irow}}\left(k\right)$ and ${\mathbf{icol}}\left(k\right)$ must contain the row and column indices, respectively, for the nonzero element stored in ${\mathbf{d}}\left(k\right)$.
Constraints:
irow and icol must satisfy the following constraints (which may be imposed by a call to f11zaf):
• ${\mathbf{irow}}\left(k-1\right)<{\mathbf{irow}}\left(k\right)$;
• ${\mathbf{irow}}\left(\mathit{k}-1\right)={\mathbf{irow}}\left(\mathit{k}\right)$ and ${\mathbf{icol}}\left(\mathit{k}-1\right)<{\mathbf{icol}}\left(\mathit{k}\right)$, for $\mathit{k}=2,3,\dots ,{\mathbf{nnz}}$.
In addition, if ${\mathbf{direct}}=\mathrm{.TRUE.}$, $1\le {\mathbf{irow}}\left(k\right)\le {\mathbf{n}}$, $1\le {\mathbf{icol}}\left(k\right)\le {\mathbf{n}}$ and ${\mathbf{irow}}\left(k\right)\ne {\mathbf{icol}}\left(k\right)$;
• if ${\mathbf{direct}}=\mathrm{.FALSE.}$, $1\le {\mathbf{irow}}\left(k\right)<{\mathbf{icol}}\left(k\right)\le {\mathbf{n}}$.
9: $\mathbf{splen}$Real (Kind=nag_wp) Output
On exit: the length of the shortest path between the specified vertices ${n}_{s}$ and ${n}_{e}$.
10: $\mathbf{path}\left({\mathbf{n}}\right)$Integer array Output
On exit: contains details of the shortest path between the specified vertices ${n}_{s}$ and ${n}_{e}$. More precisely, ${\mathbf{ns}}={\mathbf{path}}\left(1\right)\to {\mathbf{path}}\left(2\right)\to \cdots \to {\mathbf{path}}\left(p\right)={\mathbf{ne}}$ for some $p\le n$. The remaining $\left(n-p\right)$ elements are set to zero.
11: $\mathbf{iwork}\left(3×{\mathbf{n}}+1\right)$Integer array Workspace
12: $\mathbf{work}\left(2×{\mathbf{n}}\right)$Real (Kind=nag_wp) array Workspace
13: $\mathbf{ifail}$Integer Input/Output
On entry: ifail must be set to $0$, $-1$ or $1$ to set behaviour on detection of an error; these values have no effect when no error is detected.
A value of $0$ causes the printing of an error message and program execution will be halted; otherwise program execution continues. A value of $-1$ means that an error message is printed while a value of $1$ means that it is not.
If halting is not appropriate, the value $-1$ or $1$ is recommended. If message printing is undesirable, then the value $1$ is recommended. Otherwise, the value $0$ is recommended. When the value $-\mathbf{1}$ or $\mathbf{1}$ is used it is essential to test the value of ifail on exit.
On exit: ${\mathbf{ifail}}={\mathbf{0}}$ unless the routine detects an error or a warning has been flagged (see Section 6).
6Error Indicators and Warnings
If on entry ${\mathbf{ifail}}=0$ or $-1$, explanatory error messages are output on the current error message unit (as defined by x04aaf).
Errors or warnings detected by the routine:
${\mathbf{ifail}}=1$
On entry, ${\mathbf{n}}=⟨\mathit{\text{value}}⟩$.
Constraint: ${\mathbf{n}}\ge 2$.
On entry, ${\mathbf{ne}}=⟨\mathit{\text{value}}⟩$ and ${\mathbf{n}}=⟨\mathit{\text{value}}⟩$.
Constraint: $1\le {\mathbf{ne}}\le {\mathbf{n}}$.
On entry, ${\mathbf{ns}}=⟨\mathit{\text{value}}⟩$ and ${\mathbf{n}}=⟨\mathit{\text{value}}⟩$.
Constraint: $1\le {\mathbf{ns}}\le {\mathbf{n}}$.
On entry, ${\mathbf{ns}}=⟨\mathit{\text{value}}⟩$ and ${\mathbf{ne}}=⟨\mathit{\text{value}}⟩$.
Constraint: ${\mathbf{ns}}\ne {\mathbf{ne}}$.
${\mathbf{ifail}}=2$
On entry, ${\mathbf{nnz}}=⟨\mathit{\text{value}}⟩$ and ${\mathbf{n}}=⟨\mathit{\text{value}}⟩$.
Constraint: if ${\mathbf{direct}}=\mathrm{.FALSE.}$, $1\le {\mathbf{nnz}}\le {\mathbf{n}}×\left({\mathbf{n}}-1\right)/2$.
On entry, ${\mathbf{nnz}}=⟨\mathit{\text{value}}⟩$ and ${\mathbf{n}}=⟨\mathit{\text{value}}⟩$.
Constraint: if ${\mathbf{direct}}=\mathrm{.TRUE.}$, $1\le {\mathbf{nnz}}\le {\mathbf{n}}×\left({\mathbf{n}}-1\right)$.
${\mathbf{ifail}}=3$
On entry, $k=⟨\mathit{\text{value}}⟩$, ${\mathbf{irow}}\left(k\right)=⟨\mathit{\text{value}}⟩$, ${\mathbf{icol}}\left(k\right)=⟨\mathit{\text{value}}⟩$ and ${\mathbf{n}}=⟨\mathit{\text{value}}⟩$.
Constraint: $1\le {\mathbf{irow}}\left(k\right)\le {\mathbf{n}}$, $1\le {\mathbf{icol}}\left(k\right)\le {\mathbf{n}}$; ${\mathbf{icol}}\left(k\right)\ne {\mathbf{irow}}\left(k\right)$ when ${\mathbf{direct}}=\mathrm{.TRUE.}$.
${\mathbf{ifail}}=4$
On entry, $k=⟨\mathit{\text{value}}⟩$, ${\mathbf{irow}}\left(k\right)=⟨\mathit{\text{value}}⟩$, ${\mathbf{icol}}\left(k\right)=⟨\mathit{\text{value}}⟩$ and ${\mathbf{n}}=⟨\mathit{\text{value}}⟩$.
Constraint: $1\le {\mathbf{irow}}\left(k\right)<{\mathbf{icol}}\left(k\right)\le {\mathbf{n}}$ when ${\mathbf{direct}}=\mathrm{.FALSE.}$.
${\mathbf{ifail}}=5$
On entry, $k=⟨\mathit{\text{value}}⟩$, ${\mathbf{d}}\left(k\right)=⟨\mathit{\text{value}}⟩$.
Constraint: ${\mathbf{d}}\left(k\right)>0.0$.
${\mathbf{ifail}}=6$
On entry, $k=⟨\mathit{\text{value}}⟩$, ${\mathbf{irow}}\left(k\right)=⟨\mathit{\text{value}}⟩$, ${\mathbf{irow}}\left(k-1\right)=⟨\mathit{\text{value}}⟩$, ${\mathbf{icol}}\left(k\right)=⟨\mathit{\text{value}}⟩$, ${\mathbf{icol}}\left(k-1\right)=⟨\mathit{\text{value}}⟩$.
Constraints: ${\mathbf{irow}}\left(k-1\right)<{\mathbf{irow}}\left(k\right)$ or ${\mathbf{irow}}\left(k-1\right)={\mathbf{irow}}\left(k\right)$ and ${\mathbf{icol}}\left(k-1\right)<{\mathbf{icol}}\left(k\right)$.
${\mathbf{ifail}}=7$
On entry, $k=⟨\mathit{\text{value}}⟩$ ${\mathbf{irow}}\left(k\right)=⟨\mathit{\text{value}}⟩$, ${\mathbf{irow}}\left(k-1\right)=⟨\mathit{\text{value}}⟩$ ${\mathbf{icol}}\left(k\right)=⟨\mathit{\text{value}}⟩$, ${\mathbf{icol}}\left(k-1\right)=⟨\mathit{\text{value}}⟩$.
Constraint: ${\mathbf{irow}}\left(k\right)\ne {\mathbf{irow}}\left(k-1\right)$ or ${\mathbf{icol}}\left(k\right)\ne {\mathbf{icol}}\left(k-1\right)$.
${\mathbf{ifail}}=8$
On entry, ${\mathbf{ns}}=⟨\mathit{\text{value}}⟩$ and ${\mathbf{ne}}=⟨\mathit{\text{value}}⟩$.
No connected network exists between vertices ns and ne.
${\mathbf{ifail}}=-99$
See Section 7 in the Introduction to the NAG Library FL Interface for further information.
${\mathbf{ifail}}=-399$
Your licence key may have expired or may not have been installed correctly.
See Section 8 in the Introduction to the NAG Library FL Interface for further information.
${\mathbf{ifail}}=-999$
Dynamic memory allocation failed.
See Section 9 in the Introduction to the NAG Library FL Interface for further information.
7Accuracy
The results are exact, except for the obvious rounding errors in summing the distances in the length of the shortest path.
8Parallelism and Performance
h03adf is based upon Dijkstra's algorithm (see Dijkstra (1959)), which attempts to find a path ${n}_{s}\to {n}_{e}$ between two specified vertices ${n}_{s}$ and ${n}_{e}$ of shortest length $d\left({n}_{s},{n}_{e}\right)$.
The algorithm proceeds by assigning labels to each vertex, which may be temporary or permanent. A temporary label can be changed, whereas a permanent one cannot. For example, if vertex $p$ has a permanent label $\left(q,r\right)$, then $r$ is the distance $d\left({n}_{s},r\right)$ and $q$ is the previous vertex on a shortest length ${n}_{s}\to p$ path. If the label is temporary, then it has the same meaning but it refers only to the shortest ${n}_{s}\to p$ path found so far. A shorter one may be found later, in which case the label may become permanent.
The algorithm consists of the following steps.
1. 1.Assign the permanent label $\left(-,0\right)$ to vertex ${n}_{s}$ and temporary labels $\left(-,\infty \right)$ to every other vertex. Set $k={n}_{s}$ and go to 2.
2. 2.Consider each vertex $y$ adjacent to vertex $k$ with a temporary label in turn. Let the label at $k$ be $\left(p,q\right)$ and at $y\left(r,s\right)$. If $q+{d}_{ky}, then a new temporary label $\left(k,q+{d}_{ky}\right)$ is assigned to vertex $y$; otherwise no change is made in the label of $y$. When all vertices $y$ with temporary labels adjacent to $k$ have been considered, go to 3.
3. 3.From the set of temporary labels, select the one with the smallest second component and declare that label to be permanent. The vertex it is attached to becomes the new vertex $k$. If $k={n}_{e}$ go to 4. Otherwise go to 2 unless no new vertex can be found (e.g., when the set of temporary labels is ‘empty’ but $k\ne {n}_{e}$, in which case no connected network exists between vertices ${n}_{s}$ and ${n}_{e}$).
4. 4.To find the shortest path, let $\left(y,z\right)$ denote the label of vertex ${n}_{e}$. The column label ($z$) gives $d\left({n}_{s},{n}_{e}\right)$ while the row label ($y$) then links back to the previous vertex on a shortest length ${n}_{s}\to {n}_{e}$ path. Go to vertex $y$. Suppose that the (permanent) label of vertex $y$ is $\left(w,x\right)$, then the next previous vertex is $w$ on a shortest length ${n}_{s}\to y$ path. This process continues until vertex ${n}_{s}$ is reached. Hence the shortest path is
$ns→⋯→w→y→ne,$
which has length $d\left({n}_{s},{n}_{e}\right)$.
10Example
This example finds the shortest path between vertices $1$ and $11$ for the undirected network
|
2022-06-30 20:05:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 206, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9479662775993347, "perplexity": 1097.4899658952572}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103877410.46/warc/CC-MAIN-20220630183616-20220630213616-00521.warc.gz"}
|
https://search.r-project.org/CRAN/refmans/bayess/html/reconstruct.html
|
reconstruct {bayess} R Documentation
## Image reconstruction for the Potts model with six classes
### Description
This function adresses the reconstruction of an image distributed from a Potts model based on a noisy version of this image. The purpose of image segmentation (Chapter 8) is to cluster pixels into homogeneous classes without supervision or preliminary definition of those classes, based only on the spatial coherence of the structure. The underlying algorithm is an hybrid Gibbs sampler.
### Usage
reconstruct(niter, y)
### Arguments
niter number of Gibbs iterations y blurred image defined as a matrix
### Details
Using a Potts model on the true image, and uniform priors on the genuine parameters of the model, the hybrid Gibbs sampler generates the image pixels and the other parameters one at a time, the hybrid stage being due to the Potts model parameter, since it implies using a numerical integration via integrate. The code includes (or rather excludes!) the numerical integration via the vector dali, which contains the values of the integration over a 21 point grid, since this numerical integration is extremely time-consuming.
### Value
beta MCMC chain for the parameter \beta of the Potts model mu MCMC chain for the mean parameter of the blurring model sigma MCMC chain for the variance parameter of the blurring model xcum frequencies of simulated colours at every pixel of the image
Menteith
### Examples
## Not run: data(Menteith)
lm3=as.matrix(Menteith)
#warning, this step is a bit lengthy
titus=reconstruct(20,lm3)
#allocation function
affect=function(u) order(u)[6]
#
aff=apply(titus\$xcum,1,affect)
aff=t(matrix(aff,100,100))
par(mfrow=c(2,1))
image(1:100,1:100,lm3,col=gray(256:1/256),xlab="",ylab="")
image(1:100,1:100,aff,col=gray(6:1/6),xlab="",ylab="")
## End(Not run)
[Package bayess version 1.4 Index]
|
2022-05-28 13:43:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6392885446548462, "perplexity": 1711.3341470154978}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663016853.88/warc/CC-MAIN-20220528123744-20220528153744-00730.warc.gz"}
|
https://www.neuraldump.net/glossary/arithmetic-series/
|
# arithmetic series
In mathematics, an arithmetic series is the sum of a finite arithmetic progression. Consider the arithmetic series consisting of the first five terms of arithmetic sequence of natural numbers: 1 + 2 + 3 + 4 + 5. The sum is 15 which is easily calculated using simple addition. For longer series with larger numbers it is more useful to use the following formula to calculate the sum.
For an arithmetic series starting at a number , ending at a number , containing n terms, the sum of the series is:
« Back to Glossary Index
This site uses Akismet to reduce spam. Learn how your comment data is processed.
|
2020-09-20 13:18:08
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.826163649559021, "perplexity": 297.84828497183315}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400198213.25/warc/CC-MAIN-20200920125718-20200920155718-00224.warc.gz"}
|
https://socratic.org/questions/how-do-you-solve-3x-9x-10-2-17
|
How do you solve 3x + 9x + 10- 2= 17?
Oct 28, 2016
$x = \frac{3}{4}$
Explanation:
$3 x + 9 x + 10 - 2 = 17$
Subtract $8$ from each side:
$3 x + 9 x = 9$
$12 x = 9$
$x = \frac{9}{12} = \frac{3}{4}$
|
2021-12-08 03:43:37
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 6, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9222649931907654, "perplexity": 2941.378831453719}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363437.15/warc/CC-MAIN-20211208022710-20211208052710-00315.warc.gz"}
|
http://mathhelpforum.com/trigonometry/23702-i-need-help-these-3-stumping-problems.html
|
## I need help on these 3 stumping problems
Find each funtion value, rounded correctly to 4 digits.
sin 239.25 degrees =____
Find the reference angle of each of the following correct to the nearest minute.
arc cosecant= 23.55 arc reference angle =________
Find each angle described below correct to the nearest minute.
arc secant= 3.291 90 degrees< arc< 180 degrees
I have little idea on these problems so can someone help me and show their work so i can finish rest on my own.
|
2017-02-23 08:04:35
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8482701182365417, "perplexity": 2661.714732512273}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171162.4/warc/CC-MAIN-20170219104611-00484-ip-10-171-10-108.ec2.internal.warc.gz"}
|
https://www.aimsciences.org/article/doi/10.3934/ipi.2011.5.75
|
# American Institute of Mathematical Sciences
February 2011, 5(1): 75-93. doi: 10.3934/ipi.2011.5.75
## An algorithm for recovering unknown projection orientations and shifts in 3-D tomography
1 Department of Mathematics and Physics, Lappeenranta University of Technology, Lappeenranta, Finland 2 Department of Mathematics and Statistics, University of Helsinki, P.O. Box 68, 00014 Helsinki,
Received March 2010 Revised July 2010 Published February 2011
It is common for example in Cryo-electron microscopy of viruses, that the orientations at which the projections are acquired, are totally unknown. We introduce here a moment based algorithm for recovering them in the three-dimensional parallel beam tomography. In this context, there is likely to be also unknown shifts in the projections. They will be estimated simultaneously. Also stability properties of the algorithm are examined. Our considerations rely on recent results that guarantee a solution to be almost always unique. A similar analysis can also be done in the two-dimensional problem.
Citation: Jaakko Ketola, Lars Lamberg. An algorithm for recovering unknown projection orientations and shifts in 3-D tomography. Inverse Problems & Imaging, 2011, 5 (1) : 75-93. doi: 10.3934/ipi.2011.5.75
##### References:
show all references
##### References:
[1] Henri Schurz. Moment attractivity, stability and contractivity exponents of stochastic dynamical systems. Discrete & Continuous Dynamical Systems - A, 2001, 7 (3) : 487-515. doi: 10.3934/dcds.2001.7.487 [2] Jan Boman, Vladimir Sharafutdinov. Stability estimates in tensor tomography. Inverse Problems & Imaging, 2018, 12 (5) : 1245-1262. doi: 10.3934/ipi.2018052 [3] Yi Zhang, Yuyun Zhao, Tao Xu, Xin Liu. $p$th Moment absolute exponential stability of stochastic control system with Markovian switching. Journal of Industrial & Management Optimization, 2016, 12 (2) : 471-486. doi: 10.3934/jimo.2016.12.471 [4] Chengjun Guo, Chengxian Guo, Sameed Ahmed, Xinfeng Liu. Moment stability for nonlinear stochastic growth kinetics of breast cancer stem cells with time-delays. Discrete & Continuous Dynamical Systems - B, 2016, 21 (8) : 2473-2489. doi: 10.3934/dcdsb.2016056 [5] Gregory Berkolaiko, Cónall Kelly, Alexandra Rodkina. Sharp pathwise asymptotic stability criteria for planar systems of linear stochastic difference equations. Conference Publications, 2011, 2011 (Special) : 163-173. doi: 10.3934/proc.2011.2011.163 [6] Yuyun Zhao, Yi Zhang, Tao Xu, Ling Bai, Qian Zhang. pth moment exponential stability of hybrid stochastic functional differential equations by feedback control based on discrete-time state observations. Discrete & Continuous Dynamical Systems - B, 2017, 22 (1) : 209-226. doi: 10.3934/dcdsb.2017011 [7] Antonios Zagaris, Christophe Vandekerckhove, C. William Gear, Tasso J. Kaper, Ioannis G. Kevrekidis. Stability and stabilization of the constrained runs schemes for equation-free projection to a slow manifold. Discrete & Continuous Dynamical Systems - A, 2012, 32 (8) : 2759-2803. doi: 10.3934/dcds.2012.32.2759 [8] Roland Pulch. Stability preservation in Galerkin-type projection-based model order reduction. Numerical Algebra, Control & Optimization, 2019, 9 (1) : 23-44. doi: 10.3934/naco.2019003 [9] Tomás Caraballo, Leonid Shaikhet. Stability of delay evolution equations with stochastic perturbations. Communications on Pure & Applied Analysis, 2014, 13 (5) : 2095-2113. doi: 10.3934/cpaa.2014.13.2095 [10] K Najarian. On stochastic stability of dynamic neural models in presence of noise. Conference Publications, 2003, 2003 (Special) : 656-663. doi: 10.3934/proc.2003.2003.656 [11] Miljana Jovanović, Vuk Vujović. Stability of stochastic heroin model with two distributed delays. Discrete & Continuous Dynamical Systems - B, 2017, 22 (11) : 0-0. doi: 10.3934/dcdsb.2020016 [12] Lars Lamberg. Unique recovery of unknown projection orientations in three-dimensional tomography. Inverse Problems & Imaging, 2008, 2 (4) : 547-575. doi: 10.3934/ipi.2008.2.547 [13] Fuke Wu, George Yin, Le Yi Wang. Razumikhin-type theorems on moment exponential stability of functional differential equations involving two-time-scale Markovian switching. Mathematical Control & Related Fields, 2015, 5 (3) : 697-719. doi: 10.3934/mcrf.2015.5.697 [14] P. Bai, H.T. Banks, S. Dediu, A.Y. Govan, M. Last, A.L. Lloyd, H.K. Nguyen, M.S. Olufsen, G. Rempala, B.D. Slenning. Stochastic and deterministic models for agricultural production networks. Mathematical Biosciences & Engineering, 2007, 4 (3) : 373-402. doi: 10.3934/mbe.2007.4.373 [15] Miroslava Růžičková, Irada Dzhalladova, Jitka Laitochová, Josef Diblík. Solution to a stochastic pursuit model using moment equations. Discrete & Continuous Dynamical Systems - B, 2018, 23 (1) : 473-485. doi: 10.3934/dcdsb.2018032 [16] Tomás Caraballo, José Real, T. Taniguchi. The exponential stability of neutral stochastic delay partial differential equations. Discrete & Continuous Dynamical Systems - A, 2007, 18 (2&3) : 295-313. doi: 10.3934/dcds.2007.18.295 [17] Boris P. Belinskiy, Peter Caithamer. Stochastic stability of some mechanical systems with a multiplicative white noise. Conference Publications, 2003, 2003 (Special) : 91-99. doi: 10.3934/proc.2003.2003.91 [18] Zhiping Chen, Youpan Han. Continuity and stability of two-stage stochastic programs with quadratic continuous recourse. Numerical Algebra, Control & Optimization, 2015, 5 (2) : 197-209. doi: 10.3934/naco.2015.5.197 [19] Min Zhu, Panpan Ren, Junping Li. Exponential stability of solutions for retarded stochastic differential equations without dissipativity. Discrete & Continuous Dynamical Systems - B, 2017, 22 (7) : 2923-2938. doi: 10.3934/dcdsb.2017157 [20] Alexandra Rodkina, Henri Schurz, Leonid Shaikhet. Almost sure stability of some stochastic dynamical systems with memory. Discrete & Continuous Dynamical Systems - A, 2008, 21 (2) : 571-593. doi: 10.3934/dcds.2008.21.571
2018 Impact Factor: 1.469
|
2019-12-14 10:23:04
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4164327085018158, "perplexity": 8601.17892591225}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540586560.45/warc/CC-MAIN-20191214094407-20191214122407-00524.warc.gz"}
|
http://ckms.kms.or.kr/journal/view.html?doi=10.4134/CKMS.c200209
|
- Current Issue - Ahead of Print Articles - All Issues - Search - Open Access - Information for Authors - Downloads - Guideline - Regulations ㆍPaper Submission ㆍPaper Reviewing ㆍPublication and Distribution - Code of Ethics - For Authors ㆍOnline Submission ㆍMy Manuscript - For Reviewers - For Editors
Symmetricity and reversibility from the perspective of nilpotents Commun. Korean Math. Soc.Published online November 9, 2020 Abdullah Harmanci, Handan Kose, and Burcu Ungor Hacettepe University, Kirsehir Ahi Evran University, Ankara University Abstract : In this paper, we deal with the question that what kind of properties does a ring gain when it satisfies symmetricity or reversibility by the way of nilpotent elements? By the motivation of this question, we approach to symmetric and reversible property of rings via nilpotents. For symmetricity, we call a ring $R$ {\it middle right-} (resp. {\it left-}){\it nil symmetric} (mr-nil (resp. ml-nil) symmetric, for short) if $abc = 0$ implies $acb = 0$ (resp. $bac = 0)$ for $a$, $c\in R$ and $b\in$ nil$(R)$ where nil$(R)$ is the set of all nilpotent elements of $R$. It is proved that mr-nil symmetric rings are abelian and so directly finite. We show that the class of mr-nil symmetric rings strictly lies between the classes of symmetric rings and weak right nil-symmetric rings. For reversibility, we introduce {\it left} (resp. {\it right}) {\it N-reversible ideal} $I$ of a ring $R$ if for any $a\in$ nil$(R)$, $b\in R$, being $ab \in I$ implies $ba \in I$ (resp. $b\in$ nil$(R)$, $a\in R$, being $ab \in I$ implies $ba \in I$). A ring $R$ is called {\it left} (resp. {\it right}) {\it N-reversible} if the zero ideal is left (resp. right) N-reversible. Left N-reversibility is a generalization of mr-nil symmetricity. We exactly determine the place of the class of left N-reversible rings which is placed between the classes of reversible rings and CNZ rings. We also obtain that every left N-reversible ring is nil-Armendariz. It is observed that the polynomial ring over a left N-reversible Armendariz ring is also left N-reversible. Keywords : Symmetric ring; middle right-nil symmetric ring; nil-symmetric ring; reversible ring; left N-reversible ring MSC numbers : 16N40; 16S99; 16U80; 16U99 Full-Text :
Copyright © Korean Mathematical Society. All Rights Reserved. The Korea Science Technology Center (Rm. 411), 22, Teheran-ro 7-gil, Gangnam-gu, Seoul 06130, Korea Tel: 82-2-565-0361 | Fax: 82-2-565-0364 | E-mail: paper@kms.or.kr | Powered by INFOrang Co., Ltd
|
2021-01-28 06:20:53
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7229340672492981, "perplexity": 3938.017410445904}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610704835901.90/warc/CC-MAIN-20210128040619-20210128070619-00537.warc.gz"}
|
http://www.phpclasses.org/discuss/package/3111/thread/2/
|
# This is a total crap package.
Wiki Parser > All threads > This is a total crap package. > (Un) Subscribe thread alerts
Subject: This is a total crap package. Package rating comment 2 Ryan Hansen 2008-02-02 13:18:46 2009-12-24 19:40:41
Ryan Hansen rated this package as follows:
1. This is a total crap package. Reply Report abuse
Ryan Hansen - 2008-02-02 13:18:46
This is a total crap package. It doesn't even work properly for Wikipedia as it claims to and there are several errors that result even from using their provided example (with wikipedia as the URL, of course). The Wikipedia page does get retrieved ok, but none of the mark-up is converted to HTML, which is the whole point of this package.
Useless.
2. Re: This is a total crap package. Reply Report abuse
Srdjan - 2009-12-24 19:40:41 - In reply to message 1 from Ryan Hansen
URL should be http://en.wikipedia.org/wiki/Special:Export/
and then it works, but generates first non-formatted data and after that good data.
|
2015-07-03 10:44:08
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9092206358909607, "perplexity": 5061.907746698444}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375095874.61/warc/CC-MAIN-20150627031815-00132-ip-10-179-60-89.ec2.internal.warc.gz"}
|
https://tex.stackexchange.com/questions/3397/how-do-i-get-chaptername-to-appear-in-the-table-of-contents/3401
|
In my tex file I have changed the chapter name from 'Chapter' to 'Day', using:
\renewcommand \chaptername {Day}
But when I generate a table of contents it only shows '1' and not 'Day 1'. How do I get the 'Day' to show up in the contents?
I recommend using the tocloft package, although I suspect the titletoc package could also be used quite easily. tocloft provides many hooks for content to be inserted into the table of contents.
Here's an example:
\documentclass{book}
\usepackage{tocloft,calc}
\renewcommand\chaptername{Day}
\renewcommand\cftchappresnum{\chaptername\space}
\setlength{\cftchapnumwidth}{\widthof{\textbf{Day~999~}}}
\begin{document}
\tableofcontents
\setcounter{chapter}{500}
\chapter{Hello}
\end{document}
In this case \cftchappresnum inserts its argument in the chap entry, pre (before) the snum (sectioning number).
As mentioned by Juan (thanks!), it's also necessary to increase the size of the space allocated for the "chapter number". In this case we calculate how large Day 999 would be (using the calc package) and use that length.
• In the above example you also need something like \setlength{\cftchapnumwidth}{1.4cm} to increase the space available to typeset “Day n”. Sep 23 '10 at 11:45
• @Juan: I guess you should be the answering, and not me :) Sep 23 '10 at 11:50
• This works for me when I copy your example, but I was using the Memoir class for my file, and when I try it in that I get this error: ! LaTeX Error: \cftchappresnum undefined. Sep 23 '10 at 23:41
• Why does it change the font size of table of content title? How can I resize it with my custom value. Nov 21 '16 at 5:11
• Sorry, problem has been solved. I should use title option along with tocloft package if I want to use my custom title. Anyway, thank you for this solution. Nov 22 '16 at 16:46
I probably should have mentioned it was Memoir class in the first place. Anyway, after some investigating, I worked out that I had to add the line:
\renewcommand*{\cftchaptername}{Day\space}
Which did it for me.
|
2021-11-30 12:11:53
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8393331170082092, "perplexity": 1455.0730099537345}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358973.70/warc/CC-MAIN-20211130110936-20211130140936-00336.warc.gz"}
|
https://astarmathsandphysics.com/university-maths-notes/vector-calculus/2433-gradient-of-a-quotient.html?tmpl=component&print=1&page=
|
## Gradient of a Quotient
Theorem
Ifandare scalar fields then
Proof
For the partial derivatives with respect towe can write
|
2017-10-22 23:06:21
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9569787383079529, "perplexity": 8156.06923270034}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187825473.61/warc/CC-MAIN-20171022222745-20171023002745-00483.warc.gz"}
|
https://bioinformatics.stackexchange.com/questions/5359/what-is-the-most-compact-data-structure-for-canonical-k-mers-with-the-fastest-lo
|
# What is the most compact data structure for canonical k-mers with the fastest lookup time?
edit: Results are current as of Dec 4, 2018 13:00 PST.
### Background
K-mers have many uses in bioinformatics, and for this reason it would be useful to know the most RAM-efficient and fastest way to work with them programmatically. There have been questions covering what canonical k-mers are, how much RAM k-mer storage theoretically takes, but we have not yet looked at the best data structure to store and access k-mers and associated values with.
### Question
What data structure in C++ simultaneously allows the most compact k-mer storage, a property, and the fastest lookup time? For this question I choose C++ for speed, ease-of-implementation, and access to lower-level language features if desired. Answers in other languages are acceptable, too.
### Setup
• For benchmarking:
• I propose to use a standard fasta file for everyone to use. This program, generate-fasta.cpp, generates two million sequences ranging in length between 29 and 300, with a peak of sequences around length 60.
• Let's use k=29 for the analysis, but ignore implementations that require knowledge of the k-mer size before implementation. Doing so will make the resulting data structure more amenable to downstream users who may need other sizes k.
• Let's just store the most recent read that the k-mer appeared in as the property to retrieve during k-mer lookup. In most applications it is important to attach some value to each k-mer such as a taxon, its count in a dataset, et cetera.
• If possible, use the string parser in the code below for consistency between answers.
• The algorithm should use canonical k-mers. That is, a k-mer and its reverse complement are considered to be the same k-mer.
Here is generate-fasta.cpp. I used the command g++ generate_fasta.cpp -o generate_fasta to compile and the command ./generate_fasta > my.fasta to run it:
return 0;
//generate a fasta file to count k-mers
#include <iostream>
#include <random>
char gen_base(int q){
if (q <= 30){
return 'A';
} else if ((q > 30) && (q <=60) ){
return 'T';
} else if ((q > 60) && (q <=80) ){
return 'C';
} else if (q > 80){
return 'G';
}
return 'N';
}
int main() {
unsigned seed = 1;
std::default_random_engine generator (seed);
std::poisson_distribution<int> poisson (59);
std::geometric_distribution<int> geo (0.05);
std::uniform_int_distribution<int> uniform (1,100);
int printval;
int i=0;
while(i<2000000){
if (i % 2 == 0){
printval = poisson(generator);
} else {
printval = geo(generator) + 29;
}
if (printval >= 29){
std::cout << '>' << i << '\n';
//std::cout << printval << '\n';
for (int j = 0; j < printval; j++){
std::cout << gen_base(uniform(generator));
}
std::cout << '\n';
i++;
}
}
return 0;
}
### Example
One naive implementation is to add both the observed k-mer and its reverse complement as separate k-mers. This is obviously not space efficient but should have fast lookup. This file is called make_struct_lookup.cpp. I used the following command to compile on my Apple laptop (OS X): clang++ -std=c++11 -stdlib=libc++ -Wno-c++98-compat make_struct_lookup.cpp -o msl.
#include <fstream>
#include <string>
#include <map>
#include <iostream>
#include <chrono>
//build the structure. measure how much RAM it consumes.
//then measure how long it takes to lookup in the data structure
#define k 29
std::string rc(std::string seq){
std::string rc;
for (int i = seq.length()-1; i>=0; i--){
if (seq[i] == 'A'){
rc.push_back('T');
} else if (seq[i] == 'C'){
rc.push_back('G');
} else if (seq[i] == 'G'){
rc.push_back('C');
} else if (seq[i] == 'T'){
rc.push_back('A');
}
}
return rc;
}
int main(int argc, char* argv[]){
using namespace std::chrono;
//initialize the data structure
std::string thisline;
std::map<std::string, int> kmer_map;
std::string seq;
//open the fasta file
std::ifstream inFile;
inFile.open(argv[1]);
//construct the kmer-lookup structure
int i = 0;
high_resolution_clock::time_point t1 = high_resolution_clock::now();
while (getline(inFile,thisline)){
if (thisline[0] == '>'){
} else {
seq = thisline;
for (int j=0; j< thisline.size() - k + 1; j++){
}
i++;
}
}
std::cout << " -finished " << i << " seqs.\n";
inFile.close();
high_resolution_clock::time_point t2 = high_resolution_clock::now();
duration<double> time_span = duration_cast<duration<double>>(t2 - t1);
std::cout << time_span.count() << " seconds to load the array." << '\n';
//now lookup the kmers
inFile.open(argv[1]);
t1 = high_resolution_clock::now();
int lookup;
while (getline(inFile,thisline)){
if (thisline[0] != '>'){
seq = thisline;
//now lookup the kmers
for (int j=0; j< thisline.size() - k + 1; j++){
lookup = kmer_map[seq.substr(j,j+k)];
}
}
}
std::cout << " - looked at " << i << " seqs.\n";
inFile.close();
t2 = high_resolution_clock::now();
time_span = duration_cast<duration<double>>(t2 - t1);
std::cout << time_span.count() << " seconds to lookup the kmers." << '\n';
}
### Example output
I ran the above program with the following command to log peak RAM usage. The amount of time the lookup of all k-mers in two million sequences is reported by the program. /usr/bin/time -l ./msl my.fasta.
The output was:
-finished 2000000 seqs.
562.864 seconds to load the array.
- looked at 2000000 seqs.
368.734 seconds to lookup the k-mers.
1046.94 real 942.38 user 78.96 sys
11680514048 maximum resident set size
So, the program used 11680514048 bytes = 11.68GB of RAM and it took 368.734 seconds to lookup the k-mers in two million fasta files.
### Results
Below is a plot of the results from each user's answers.
• try std::unordered_map instead of std::map. This should already give you quite a boost. – Peter Menzel Nov 3 '18 at 7:32
• @PeterMenzel, thanks for your comment. What do you think about submitting this comment as an answer? I can implement it and add the results. – conchoecia Nov 3 '18 at 18:41
• You can try std::unordered_map instead of std::map as a simple improvement (hopefully) – Peter Menzel Nov 4 '18 at 9:50
• Can you be specific about which you care about more: compactness or speed? If both are equally important, then there will likely be no answer to this question. – winni2k Nov 9 '18 at 9:26
• YMMV depending on your compiler. It worked for me with g++ Apple LLVM version 10.0.0 (clang-1000.10.44.2). The gen_base function uses a random int between 1-100 as input to select a base. The ranges in the function give it some AT bias, The Poisson distribution makes a bunch of sequences around 59 bp long, but the geometric distribution gives it a long tail of sequence lengths. In this case I was modeling a distribution of some sequences I need to look up kmers for. – conchoecia Nov 21 '18 at 0:47
The question and the accepted answer are not about k-mer data structure at all, which I will explain in detail below. I will first answer the actual question OP intends to ask.
The simplest way to keep k-mers is to use an ordinary hash table. The performance is mostly determined by the hash table library. std::unordered_map in gcc/clang is one of the worst choices because for integer keys, it is very slow. Google dense, ska::bytell_hash_map and ska::flat_hash_map, tsl::robin_map and absl::flat_hash_map are much faster. There are a few libraries that focus on smaller footprint, such as google sparse and sparsepp, but those can be a few times slower.
In addition to the choice of hash table, how to construct the key is critical. For k<=32, the right choice is to encode a k-mer with a 64-bit integer, which will be vastly better than std::string. Memory alignment is also important. In C/C++, as long as there is one 8-byte member in a struct, the struct will be 8-byte aligned on x86_64 by default. Most C++ hash table libraries pack key and value in std::pair. If you use 64-bit keys and 32-bit values, std::pair will be 8-byte aligned and use 16 bytes, even though only 12 bytes are actually used – 25% of memory is wasted. In C, we can explicitly define a packed struct with __attribute__ ((__packed__)). In C++, probably you need to define special key types. A better way to get around memory alignment is to go down to the bit level. For read mapping, for example, we only use 15–23bp seeds. Then we have 18 (=64-23*2) bits left unused. We can use these 18 bits to count k-mers. Such bit-level management is quite common.
The above is just basic techniques. There are a few other tricks. For example, 1) instead of using one hash table, we can use 4096 (=2^12) hash tables. Then we can store 12 bits of k-mer information into the 4096 part. This gives us invaluable 12 bits in each bucket to store extra information. This strategy also simplifies parallel k-mer insertions as with a good hash function, it is rare to insert into two tables at the same time. 2) when most k-mers are unique, the faster way to count k-mers is to put k-mers in an array and then sort it. Sorting is more cache friendly and is faster than hash table lookups. The downside is that sort counting can be memory demanding when most k-mers are highly repetitive.
The other answer is spending considerable (probably the majority of) time on k-mer iteration, not on hash table operations. The program loops through each position on the sequence and then each k-mer position. For an $$L$$-long sequence, this is an $$O(kL)$$ algorithm. It has worse theoretical time complexity than hash table operations, which is $$O(L)$$. Although hash table operations are slow due to cache misses, a factor of k=29 is quite significant. Another issue is that all programs in the question and in the other answer are compiled without -O3. Adding this option brings the bytell_hash_map lookup time from 314s to 34s on my machine.
The C program at the end of my post shows the proper way to iterate k-mers. It is an $$O(L)$$ algorithm with a tiny constant. The program keeps track of both forward and reverse k-mers at the same time and update them with a few bit operations at each sequence position. This echoes my previous comment "You should not reverse complement the whole k-mer". On the same machine, the program looks up k-mers in 5.5s using 792MB RAM at the peak. This 6-fold (=34/5.5) speedup mostly comes from k-mer iteration, given that the hash table library in use is known to have comparable performance to bytell_hash_map.
#include <stdio.h>
#include <stdint.h>
#include "khash.h"
static inline uint64_t hash_64(uint64_t key)
{ // more sophisticated hash function to reduce collisions
key = (~key + (key << 21)); // key = (key << 21) - key - 1;
key = key ^ key >> 24;
key = ((key + (key << 3)) + (key << 8)); // key * 265
key = key ^ key >> 14;
key = ((key + (key << 2)) + (key << 4)); // key * 21
key = key ^ key >> 28;
key = (key + (key << 31));
return key;
}
KHASH_INIT(64, khint64_t, int, 1, hash_64, kh_int64_hash_equal)
unsigned char seq_nt4_table[128] = { // Table to change "ACGTN" to 01234
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 0, 4, 1, 4, 4, 4, 2, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 3, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 0, 4, 1, 4, 4, 4, 2, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 3, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4
};
static uint64_t process_seq(khash_t(64) *h, int k, int len, char *seq, int is_ins)
{
int i, l;
uint64_t x[2], mask = (1ULL<<k*2) - 1, shift = (k - 1) * 2, tot = 0;
for (i = l = 0, x[0] = x[1] = 0; i < len; ++i) {
int c = (uint8_t)seq[i] < 128? seq_nt4_table[(uint8_t)seq[i]] : 4;
if (c < 4) { // not an "N" base
x[0] = (x[0] << 2 | c) & mask; // forward strand
x[1] = x[1] >> 2 | (uint64_t)(3 - c) << shift; // reverse strand
if (++l >= k) { // we find a k-mer
uint64_t y = x[0] < x[1]? x[0] : x[1];
khint_t itr;
if (is_ins) { // insert
int absent;
itr = kh_put(64, h, y, &absent);
if (absent) kh_val(h, itr) = 0;
tot += ++kh_val(h, itr);
} else { // look up
itr = kh_get(64, h, y);
tot += itr == kh_end(h)? 0 : kh_val(h, k);
}
}
} else l = 0, x[0] = x[1] = 0; // if there is an "N", restart
}
}
#include <zlib.h>
#include <time.h>
#include <unistd.h>
#include "kseq.h"
int main(int argc, char *argv[])
{
khash_t(64) *h;
int i, k = 29;
while ((i = getopt(argc, argv, "k:")) >= 0)
if (i == 'k') k = atoi(optarg);
h = kh_init(64);
for (i = 1; i >= 0; --i) {
uint64_t tot = 0;
kseq_t *ks;
gzFile fp;
clock_t t;
fp = gzopen(argv[optind], "r");
ks = kseq_init(fp);
t = clock();
tot += process_seq(h, k, ks->seq.l, ks->seq.s, i);
fprintf(stderr, "[%d] %.3f\n", i, (double)(clock() - t) / CLOCKS_PER_SEC);
kseq_destroy(ks);
gzclose(fp);
}
kh_destroy(64, h);
return 0;
}
• Thanks for updating your comments. Doesn't 'reduce collisions' in the hash function imply that collisions are possible? Collisions aren't OK for some applications. How often should this happen and under what circumstances? – conchoecia Dec 5 '18 at 12:56
• @conchoecia you are misunderstanding what collisions mean in hash tables. Ordinary hash functions all have collisions. Perfect hash function doesn't, but constructing it is a non-trivial task and requires dedicated libraries. Perfect hash function is not always practical depending on applications. – user172818 Dec 5 '18 at 13:05
• What I think you mean by collisions -> "A collision means that two unique elements produce the same hash value. In this case for kmers, two unique DNA k-mer strings would produce the same hash value. So when counting 3-mers in the dataset AAAT, if AAT and AAA have the same hash value, our data structure would show that the k-mers AAT occurs twice and AAA occurs twice." – conchoecia Dec 5 '18 at 13:20
• @conchoecia Hash table keeps the actual keys, not only their hashes. AAT and AAA are distinct keys and will only be counted once. Collision is something that every standard library has to resolve. Every implementation here has collisions. Please read the wiki page to understand how hash table works. PS: collisions have nothing to do with the results. They only affect performance. – user172818 Dec 5 '18 at 13:30
• The more I go over your code the more I am learning. This is amazing. Thank you so much. – conchoecia Dec 6 '18 at 20:44
|
2021-06-21 12:45:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 4, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.234663188457489, "perplexity": 3436.60446520908}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488273983.63/warc/CC-MAIN-20210621120456-20210621150456-00420.warc.gz"}
|
https://dsp.stackexchange.com/questions/23901/when-sinusoidal-input-starts-at-n-0-why-are-transient-response-associated-with
|
# When sinusoidal input starts at n=0, why are transient response associated with z-transform poles of digital filter?
In http://www.eng.ucy.ac.cy/cpitris/courses/ECE623/presentations/DSP-LECT-10-11-12.pdf, it says that when sinusoidal input $X(z)$ starts at n=0 (with n<0 having zero input) and the input passes through digitl filter $H(z)$, resulting in output z-transform $Y(z) = H(z)X(z)$, transient response is associated with poles of the filter while steady-state response is associated with the poles of the input. Is this true? If so, how does one prove this?
Yes, that's true if the $\mathcal{Z}$-transform of the input signal has poles on the unit circle (i.e., the input is a constant or a sinusoid, starting at $n=0$), and if the filter is stable, i.e. all its poles are inside the unit circle. If you split up $Y(z)$ by a partial fraction expansion, you get a contribution for each pole. The contributions from the poles of $H(z)$ all decay to zero because we've assumed that the filter is stable, i.e. all its pole magnitudes are smaller than $1$, and consequently, these terms decay exponentially. On the other hand, the contributions from the poles of $X(z)$ do not decay because the corresponding poles lie on the unit circle. This means that after a while, all contributions associated with the poles of $H(z)$ will be very small (practically zero; transients), whereas the contributions from the poles of $X(z)$ remain at a constant amplitude (steady-state response).
|
2020-01-20 11:47:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9516366720199585, "perplexity": 266.5577894160232}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250598726.39/warc/CC-MAIN-20200120110422-20200120134422-00320.warc.gz"}
|
https://www.physicsforums.com/threads/mechanical-energy-how-to-solve-for-v.1000927/
|
# Mechanical energy- how to solve for v
Homework Statement:
Mechanical energy. Solve for v
Relevant Equations:
1/2mv^2 + mgh
Hi there,
I am doing a mechanical energy question. I think the solution is simple but I'm stuck on an algebra step.
This is the solution but I am really not sure how they have simplified down to Va.
For example I tried to factor out the m resulting in
1/2mv^2=m(1/2va^2+gh)
Then I cancel the m
1/2v^2= 1/2va^2+gh
From here I am not really sure what they did to arrive at the final answer
haruspex
Homework Helper
Gold Member
2020 Award
Homework Statement:: Mechanical energy. Solve for v
Relevant Equations:: 1/2mv^2 + mgh
Hi there,
I am doing a mechanical energy question. I think the solution is simple but I'm stuck on an algebra step.
View attachment 279825
This is the solution but I am really not sure how they have simplified down to Va.
For example I tried to factor out the m resulting in
1/2mv^2=m(1/2va^2+gh)
Then I cancel the m
1/2v^2= 1/2va^2+gh
From here I am not really sure what they did to arrive at the final answer
You can see they got rid of the factors of 1/2. How would you do that?
They also got vA on one side of the equation and everything else on the other , so do that.
You can see they got rid of the factors of 1/2. How would you do that?
They also got vA on one side of the equation and everything else on the other , so do that.
I'm really not sure how they got rid of the factors of 1/2. Can you help me?
Getting VA to one side is easy enough for me. Starting with ½v^2=½VA^2+gh after factoring out m, I would then just subtract gh resulting in 1/2 v^2-gh= 1/2 VA^2.
Still really not sure how they got rid of the halfs. My first thought is to multiply by 2 but that doesn't give me the answer
haruspex
Homework Helper
Gold Member
2020 Award
My first thought is to multiply by 2
It gets rid of the halves, so do it.
With vA alone on one side, what is your equation now?
It gets rid of the halves, so do it.
With vA alone on one side, what is your equation now?
If I multiply by 2 and then sqrt I end up with
VA=√ v^2-gh
I'm just missing the 2gh
haruspex
Homework Helper
Gold Member
2020 Award
If I multiply by 2 and then sqrt I end up with
VA=√ v^2-gh
I'm just missing the 2gh
Then you did not get rid of the halves correctly. Retry that step.
Then you did not get rid of the halves correctly. Retry that step.
It's just clicked! Thank you for the prompts I've got it
|
2021-10-15 21:56:28
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8787016272544861, "perplexity": 811.6055293872017}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323583083.92/warc/CC-MAIN-20211015192439-20211015222439-00552.warc.gz"}
|
https://deepai.org/publication/on-the-convergence-of-adabound-and-its-connection-to-sgd
|
# On the Convergence of AdaBound and its Connection to SGD
Adaptive gradient methods such as Adam have gained extreme popularity due to their success in training complex neural networks and less sensitivity to hyperparameter tuning compared to SGD. However, it has been recently shown that Adam can fail to converge and might cause poor generalization -- this lead to the design of new, sophisticated adaptive methods which attempt to generalize well while being theoretically reliable. In this technical report we focus on AdaBound, a promising, recently proposed optimizer. We present a stochastic convex problem for which AdaBound can provably take arbitrarily long to converge in terms of a factor which is not accounted for in the convergence rate guarantee of Luo et al. (2019). We present a new $O(\sqrt T)$ regret guarantee under different assumptions on the bound functions, and provide empirical results on CIFAR suggesting that a specific form of momentum SGD can match AdaBound's performance while having less hyperparameters and lower computational costs.
## Authors
• 9 publications
• ### Calibrating the Learning Rate for Adaptive Gradient Methods to Improve Generalization Performance
08/02/2019 ∙ by Qianqian Tong, et al. ∙ 0
• ### The Role of Momentum Parameters in the Optimal Convergence of Adaptive Polyak's Heavy-ball Methods
02/15/2021 ∙ by Wei Tao, et al. ∙ 0
02/26/2019 ∙ by Liangchen Luo, et al. ∙ 0
• ### YellowFin and the Art of Momentum Tuning
Hyperparameter tuning is one of the big costs of deep learning. State-of...
06/12/2017 ∙ by Jian Zhang, et al. ∙ 0
• ### Improving Generalization Performance by Switching from Adam to SGD
Despite superior training outcomes, adaptive optimization methods such a...
12/20/2017 ∙ by Nitish Shirish Keskar, et al. ∙ 0
07/18/2021 ∙ by Zhou Shao, et al. ∙ 0
• ### Robust Neural Network Training using Periodic Sampling over Model Weights
Deep neural networks provide best-in-class performance for a number of c...
05/14/2019 ∙ by Samarth Tripathi, et al. ∙ 0
##### This week in AI
Get the week's most popular data science and artificial intelligence research sent straight to your inbox every Saturday.
## 1 Introduction
We consider first-order optimization methods which are concerned with problems of the following form:
minx∈Ff(x) (1)
where is the feasible set of solutions and is the objective function. First-order methods typically operate in an iterative fashion: at each step , the current candidate solution is updated using both zero-th and first-order information about (e.g., and
, or unbiased estimates of each). Methods such as gradient descent and its stochastic counterpart can be written as:
xt+1=ΠF(xt−αt⋅mt) (2)
where is the learning rate at step , is the update direction (e.g., for deterministic gradient descent), and denotes a projection onto . The behavior of vanilla gradient-based methods is well-understood under different frameworks and assumptions ( regret in the online convex framework (Zinkevich, 2003), suboptimality in the stochastic convex framework, and so on).
(Tieleman and Hinton, 2012) and Adam (Kingma and Ba, 2015) propose to compute a different learning rate for each parameter in the model. In particular, the parameters are updated according to the following rule:
xt+1=ΠF(xt−ηt⊙mt) (3)
where are parameter-wise learning rates and denotes element-wise multiplication. For Adam, we have and with , where captures first-order information of the objective function (e.g., in the stochastic setting).
Adaptive methods have become popular due to their flexibility in terms of hyperparameters, which require less tuning than SGD. In particular, Adam is currently the de-facto optimizer for training complex models such as BERT (Devlin et al., 2018) and VQ-VAE (van den Oord et al., 2017).
Recently, it has been observed that Adam has both theoretical and empirical gaps. Reddi et al. (2018) showed that Adam can fail to achieve convergence even in the stochastic convex setting, while Wilson et al. (2017) have formally demonstrated that Adam can cause poor generalization – a fact often observed when training simpler CNN-based models such as ResNets (He et al., 2016). While the theoretical gap has been closed in Reddi et al. (2018) with AMSGrad – an Adam variant with provable convergence for online convex problems – achieving SGD-like performance with adaptive methods has remained an open-problem.
AdaBound (Luo et al., 2019) is a recently proposed adaptive gradient method that aims to bridge the empirical gap between Adam-like methods and SGD, and consists of enforcing dynamic bounds on such that as goes to infinity,
converges to a vector whose components are equal – hence degenerating to SGD. AdaBound comes with a
regret rate in the online convex setting, yielding an immediate guarantee in the stochastic convex framework due to Cesa-Bianchi et al. (2006). Moreover, empirical experiments suggest that it is capable of outperforming SGD in image classification tasks – problems where adaptive methods have historically failed to provide competitive results.
In Section 3, we highlight issues in the convergence rate proof of AdaBound (Theorem 4 of Luo et al. (2019)), and present a stochastic convex problem for which AdaBound can take arbitrarily long to converge. More importantly, we show that the presented problem leads to a contradiction with the convergence guarantee of AdaBound while satisfying all of its assumptions, implying that Theorem 4 of Luo et al. (2019) is indeed incorrect. In Section 4, we introduce a new assumption which yields a regret guarantee without assuming that the bound functions are monotonic nor that they converge to the same limit. Driven by the new guarantee, in Section 5 we re-evaluate the performance of AdaBound on the CIFAR dataset, and observe that its performance can be matched with a specific form of SGDM, whose computational cost is significantly smaller than that of Adam-like methods.
## 2 Notation
For vectors and scalar , we use the following notation: for element-wise division (), for element-wise square root (), for element-wise addition (), for element-wise multiplication (). Moreover, is used to denote the -norm: other norms will be specified whenever used (e.g., ).
For subscripts and vector indexing, we adopt the following convention: the subscript is used to denote an object related to the -th iteration of an algorithm (e.g., denotes the iterate at time step ); the subscript is used for indexing: denotes the -th coordinate of . When used together, precedes : denotes the -th coordinate of .
## 3 AdaBound’s Arbitrarily Slow Convergence
AdaBound is given as Algorithm 1, following (Luo et al., 2019). It consists of an update rule similar to Adam, except for the extra element-wise clipping operation , which assures that for all . The bound functions are chosen such that is non-decreasing, is non-increasing, and , for some . It then follows that , thus AdaBound degenerates to SGD in the time limit.
In (Luo et al., 2019), the authors present the following Theorem:
###### Theorem 1.
(Theorem 4 of Luo et al. (2019)) Let and be the sequences obtained from Algorithm 1, , for all and . Suppose , , as , as , and . Assume that for all and for all and . For generated using the AdaBound algorithm, we have the following bound on the regret
RT≤D2∞√T2(1−β1)d∑i=1^η−1T,i+D2∞2(1−β1)T∑t=1d∑i=1β1tη−1t,i+(2√T−1)R∞G221−β1 (4)
Its proof claims that follows from the definition of in AdaBound, a fact that only generally holds if for all . Even for the bound functions considered in Luo et al. (2019) and used in the released code, this requirement is not satisfied for any . Finally, it is also possible to show that AMSBound does not meet this requirement either, hence the proof of Theorem 5 of Luo et al. (2019) is also problematic.
It turns out that the convergence of AdaBound in the stochastic convex case can be arbitrarily slow, even for bound functions that satisfy the assumptions in Theorem 1:
###### Theorem 2.
For any constant , and initial step size , there exist bound functions such that , , , and a stochastic convex optimization problem for which the iterates produced by AdaBound satisfy for all .
###### Proof.
We consider the same stochastic problem as presented in Reddi et al. (2018), for which Adam fails to converge. In particular, a one-dimensional problem over , where is chosen i.i.d. as follows:
ft(x)={Cx,with probability p\coloneqq1+δC+1−x,with probability 1−p (5)
Here, is taken to be large in terms of and , and . Now, consider the following bound functions:
ηl(t)=α/Cηu(t;K)={α/√1−β2,for t≤Kα/C,otherwise (6)
and check that and are non-decreasing and non-increasing in , respectively, and . We will show that such bound functions can be effectively ignored for . Check that, for all :
vt=(1−β2)t∑i=1βt−i2g2i≤C2(1−βt2)≤C2vt=(1−β2)t∑i=1βt−i2g2i≥1−βt2≥1−β2 (7)
where we used the fact that and that . Hence, we have, for :
ηl(t)=αC≤α√vt≤α√1−β2=ηu(t;K) (8)
Since for all , the clipping operation acts as an identity mapping and . Therefore, in this setting, AdaBound produces the same iterates as Adam. We can then invoke Theorem 3 of Reddi et al. (2018), and have that, with large enough (as a function of ), for all , we have . In particular, with , for all and hence . Setting finishes the proof. ∎
While the bound functions considered in the Theorem above might seem artificial, the same result holds for bound functions of the form and , considered in Luo et al. (2019) and in the publicly released implementation of AdaBound:
###### Claim 1.
Theorem 2 also holds for the bound functions and with
γ=1K⋅min(αC−α,√1−β2α) (9)
###### Proof.
Check that, for all :
ηl(t;K)≤1−1γK+1≤1−1αC−α+1=1−C−αC=αCηu(t;K)≥1+1γK≥1+α√1−β2≥α√1−β2 (10)
Hence, for the stochastic problem in Theorem 2, we also have that for all . ∎
Note that it is straightforward to prove a similar result for the online convex setting by invoking Theorem 2 instead of Theorem 3 of Reddi et al. (2018) – this would immediately imply that Theorem 1 is incorrect. Instead, Theorem 2 was presented in the convex stochastic setup as it yields a stronger result, and it almost immediately implies that Theorem 1 might not hold:
###### Corollary 1.
There exists an instance where Theorem 1 does not hold.
###### Proof.
Consider AdaBound with the bound functions presented in Theorem 2 and . For any sequence drawn for the stochastic problem in Theorem 2, setting and in Theorem 1 yields, for :
RK=K∑t=1(ft(xt)−ft(x∗))≤2dC√Kα+(2√K−1)αC2√1−β2 (11)
where we used the fact that . Pick large enough such that . Taking expectation over sequences and dividing by :
1KK∑t=1E[f(xt)]−f(x∗)<0.01 (12)
However, Theorem 2 assures for all , raising a contradiction. ∎
Note that while the above result shows that Theorem 1 is indeed incorrect, it does not imply that AdaBound might fail to converge.
## 4 A New Guarantee
The results in the previous section suggest that Theorem 1 fails to capture all relevant properties of the bound functions. Although it is indeed possible to show that , it is not clear whether a regret rate can be guaranteed for general bound functions.
It turns out that replacing the previous requirements on the bound functions by the assumption that for all suffices to guarantee a regret of :
###### Theorem 3.
Let and be the sequences obtained from Algorithm 1, , for all and . Suppose and for all . Assume that for all and for all and . For generated using the AdaBound algorithm, we have the following bound on the regret
RT≤D2∞2(1−β1)[2dM(√T−1)+d∑i=1[η−11,i+T∑t=1β1tη−1t,i]]+(2√T−1)R∞G221−β1 (13)
###### Proof.
We start from an intermediate result of the original proof of Theorem 4 of Luo et al. (2019):
###### Lemma 1.
For the setting in Theorem 3, we have:
RT≤T∑t=112(1−β1t)[∥η−1/2t⊙(xt−x∗)∥2−∥η−1/2t⊙(xt+1−x∗)∥2]S1+T∑t=1β1t2(1−β1t)∥η−1/2t⊙(xt−x∗)∥2S2+(2√T−1)R∞G221−β1 (14)
###### Proof.
The result follows from the proof of Theorem 4 in Luo et al. (2019), up to (but not including) Equation 6. ∎
We will proceed to bound and from the above Lemma. Starting with :
S1=d∑i=1T∑t=112(1−β1t)[η−1t,i(xt,i−x∗i)2−η−1t,i(xt+1,i−x∗i)2]≤d∑i=1[η−11,i(x1,i−x∗i)22(1−β11)+T∑t=2⎡⎣η−1t,i2(1−β1t)−η−1t−1,i2(1−β1(t−1))⎤⎦(xt,i−x∗i)2]≤d∑i=1[η−11,i(x1,i−x∗i)22(1−β11)+T∑t=2[η−1t,i−η−1t−1,i]2(1−β1(t−1))(xt,i−x∗i)2]≤d∑i=1[η−11,i(x1,i−x∗i)22(1−β11)+T∑t=2[√tηl(t)−√t−1ηu(t−1)](xt,i−x∗i)22(1−β1(t−1))]≤d∑i=1[η−11,i(x1,i−x∗i)22(1−β11)+T∑t=21√t[tηl(t)−t−1ηu(t−1)](xt,i−x∗i)22(1−β1(t−1))]≤d∑i=1[η−11,i(x1,i−x∗i)22(1−β11)+T∑t=2M√t⋅(xt,i−x∗i)22(1−β1(t−1))]≤D2∞2(1−β1)d∑i=1[η−11,i+2M(√T−1)]=D2∞2(1−β1)[2dM(√T−1)+d∑i=1η−11,i] (15)
In the second inequality we used , in the third the definition of along with the fact that for all and , in the fifth the assumption that , and in the sixth we used the bound on the feasible region, along with and for all .
For , we have:
S2=d∑i=1T∑t=112(1−β1t)β1tη−1t,i(xt,i−x∗i)2≤D2∞2(1−β1)d∑i=1T∑t=1β1tη−1t,i (16)
where we used the bound on the feasible region, and the fact that for all .
Combining (15) and (16) into (14), we get:
RT≤D2∞2(1−β1)[2dM(√T−1)+d∑i=1[η−11,i+T∑t=1β1tη−1t,i]]+(2√T−1)R∞G221−β1 (17)
The above regret guarantee is similar to the one in Theorem 4 of Luo et al. (2019), except for the term which accounts for assumption introduced. Note that Theorem 3 does not require to be non-increasing, to be non-decreasing, nor that .
It is easy to see that the assumption indeed holds for the bound functions in Luo et al. (2019):
###### Proposition 1.
For the bound functions
ηl(t)=1−1γt+1ηu(t)=1+1γt (18)
if , we have:
tηl(t)−t−1ηu(t−1)≤3+2γ−1 (19)
###### Proof.
First, check that and . Then, we have:
(20)
In the first inequality we used for all , and in the last the fact that for all and , which is equivalent to . ∎
With this in hand, we have the following regret bound for AdaBound:
###### Corollary 2.
Suppose , , and for in Theorem 3. Then, we have:
RT≤5√T1−β1(1+γ−1)(dD2∞+G22) (21)
###### Proof.
From the bound in Theorem 3, it follows that:
(22)
In the first inequality we used the facts that , and that . In the second, that . In the third, that . In the fourth, we used the bound on from Proposition 1. ∎
It is easy to check that the previous results also hold for AMSBound (Algorithm 3 in Luo et al. (2019)), since no assumptions were made on the point-wise behavior of .
###### Remark 1.
Theorem 3 and Corollary 2 also hold for AMSBound.
## 5 Experiments on AdaBound and SGD
Unfortunately, the regret bound in Corollary 2 is minimized in the limit , where AdaBound immediately degenerates to SGD. To inspect whether this fact has empirical value or is just an artifact of the presented analysis, we evaluate the performance of AdaBound when training neural networks on the CIFAR dataset (Krizhevsky, 2009) with an extremely small value for parameter.
Note that was used for the CIFAR results in Luo et al. (2019), for which we have and
after only 3 epochs (
iterations per epoch for a batch size of ), hence we believe results with considerably smaller/larger values for are required to understand its impact on the performance of AdaBound.
We trained a Wide ResNet-28-2 (Zagoruyko and Komodakis, 2016) using the same settings in Luo et al. (2019) and its released code 111https://github.com/Luolc/AdaBound, version 2e928c3: , a weight decay of , a learning rate decay of factor 10 at epoch 150, and batch size of . For AdaBound, we used the author’s implementation with , and for SGD we used
. Experiments were done in PyTorch.
To clarify our network choice, note that the model used in Luo et al. (2019) is not a ResNet-34 from He et al. (2016), but a variant used in DeVries and Taylor (2017), often referred as ResNet-34. In particular, the ResNet-34 from He et al. (2016) consists of 3 stages and less than 0.5M parameters, while the network used in Luo et al. (2019) has 4 stages and around 21M parameters. The network we used has roughly 1.5M parameters.
Our preliminary results suggest that the final test performance of AdaBound is monotonically increasing with – more interestingly, there is no significant difference throughout training between and (for the latter, we have and ).
To see why AdaBound with behaves so differently than SGDM, check that the momentum updates slightly differ between the two: for AdaBound, we have:
mt=β1mt−1+(1−β1)gt (23)
while, for the implementation of SGDM used in Luo et al. (2019), we have:
mt=β1mt−1+(1−κ)gt (24)
where is the dampening factor. The results in Luo et al. (2019) use , which can cause to be larger by a factor of compared to AdaBound. In principle, setting in SGDM should yield dynamics similar to AdaBound’s as long as is not extremely small.
Figure 1 presents our main empirical results: setting causes noticeable performance degradation compared to in AdaBound, as Corollary 2 might suggest. Moreover, setting in SGDM causes a dramatic performance increase throughout training. In particular, it slightly outperforms AdaBound in terms of final test accuracy ( against , average over 5 runs), while being comparably fast and consistent in terms of progress during optimization.
We believe SGDM with (which is currently not
the default in either PyTorch or Tensorflow) might be a reasonable alternative to adaptive gradient methods in some settings, as it also requires less computational resources: AdaBound, Adam and SGDM’s updates cost
, and float operations, respectively, and their memory costs are , and . Moreover, AdaBound has 5 hyperparameters (), while SGDM with has only 2 (). Studying the effectiveness of ‘dampened’ SGDM, however, requires extensive experiments which are out of the scope of this technical report.
Lastly, we evaluated whether performing bias correction on the form of SGDM affects its performance. More specifically, we divide the learning rate at step by a factor of
. We observed that bias correction has no significant effect on the average performance, but yields smaller variance: the standard deviation of the final test accuracy over 5 runs decreased from
## 6 Discussion
In this technical report, we identified issues in the proof of the main Theorem of Luo et al. (2019), which presents a regret rate guarantee for AdaBound. We presented an instance where the statement does not hold, and provided a regret guarantee under different – and arguably less restrictive – assumptions. Finally, we observed empirically that AdaBound with a theoretically optimal indeed yields superior performance, although it degenerates to a specific form of momentum SGD. Our experiments suggest that this form of SGDM (with a dampening factor equal to its momentum) performs competitively to AdaBound on CIFAR.
### Acknowledgements
We are in debt to Rachit Nimavat for proofreading the manuscript and the extensive discussion, and thank Sudarshan Babu and Liangchen Luo for helpful comments.
## References
• N. Cesa-Bianchi, A. Conconi, and C. Gentile (2006) On the generalization ability of on-line learning algorithms. IEEE Trans. Inf. Theor.. Cited by: §1.
• J. Devlin, M. Chang, K. Lee, and K. Toutanova (2018) BERT: pre-training of deep bidirectional transformers for language understanding. arXiv:1810.04805. Cited by: §1.
• T. DeVries and G. W. Taylor (2017)
Improved regularization of convolutional neural networks with cutout
.
arXiv:1708.04552. Cited by: §5.
• J. Duchi, E. Hazan, and Y. Singer (2011) Adaptive subgradient methods for online learning and stochastic optimization. ICML. Cited by: §1.
• K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. CVPR. Cited by: §1, §5.
• D. P. Kingma and J. Ba (2015) Adam: a method for stochastic optimization. ICLR. Cited by: §1.
• A. Krizhevsky (2009) Learning multiple layers of features from tiny images. Technical report Cited by: §5.
• L. Luo, Xiong, Yuanhao, Liu, Yan, and Xu. Sun (2019) Adaptive gradient methods with dynamic bound of learning rate. ICLR (arXiv:1902.09843). Cited by: On the Convergence of AdaBound and its Connection to SGD, §1, §1, §3, §3, §3, §3, §4, §4, §4, §4, §4, §5, §5, §5, §5, §6, Theorem 1.
• S. J. Reddi, S. Kale, and S. Kumar (2018) On the convergence of adam and beyond. ICLR. Cited by: §1, §3, §3.
• T. Tieleman and G. Hinton (2012) Lecture 6.5—RmsProp: Divide the gradient by a running average of its recent magnitude. Note:
COURSERA: Neural Networks for Machine Learning
Cited by: §1.
• A. van den Oord, O. Vinyals, and K. Kavukcuoglu (2017) Neural Discrete Representation Learning. arXiv:1711.00937. Cited by: §1.
• A. C. Wilson, R. Roelofs, M. Stern, N. Srebro, and B. Recht (2017) The Marginal Value of Adaptive Gradient Methods in Machine Learning. NIPS. Cited by: §1.
• S. Zagoruyko and N. Komodakis (2016) Wide residual networks. BMVC. Cited by: §5.
• M. Zinkevich (2003) Online convex programming and generalized infinitesimal gradient ascent. ICML. Cited by: §1.
|
2021-09-24 02:40:03
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.877286970615387, "perplexity": 1043.447818008133}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057496.18/warc/CC-MAIN-20210924020020-20210924050020-00589.warc.gz"}
|
https://www.shaalaa.com/question-bank-solutions/a-ball-whirled-circle-attaching-it-fixed-point-string-there-angular-rotation-ball-about-its-centre-angular-momentum-case-rotation-about-fixed-axis_66914
|
Department of Pre-University Education, KarnatakaPUC Karnataka Science Class 11
# A Ball is Whirled in a Circle by Attaching It to a Fixed Point with a String. is There an Angular Rotation of the Ball About Its Centre? - Physics
Short Note
Sum
A ball is whirled in a circle by attaching it to a fixed point with a string. Is there an angular rotation of the ball about its centre? If yes, is this angular velocity equal to the angular velocity of the ball about the fixed point?
#### Solution
Yes, there is an angular rotation of the ball about its centre.
Yes, angular velocity of the ball about its centre is same as the angular velocity of the ball about the fixed point.
Explanation:-
Let the time period of angular rotation of the ball be T.
Therefore, we get
Angular velocity of the ball about the fixed point = $\frac{2\pi}{T}$
After one revolution about the fixed centre is completed, the ball has come back to its original position. In this case, the point at which the ball meets with the string is again visible after one revolution. This means that it has undertaken one complete rotation about its centre.
The ball has taken one complete rotation about its centre. Therefore, we have
Angular displacement of the ball = $2\pi$
Time period = T
So, angular velocity is again $\frac{2\pi}{T}.$
Thus, in both the cases, angular velocities are the same.
Is there an error in this question or solution?
#### APPEARS IN
HC Verma Class 11, 12 Concepts of Physics 1
Chapter 10 Rotational Mechanics
Short Answers | Q 4 | Page 192
|
2021-04-16 14:23:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7073208689689636, "perplexity": 581.6370599956715}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038066981.0/warc/CC-MAIN-20210416130611-20210416160611-00247.warc.gz"}
|